00:00:00.001 Started by upstream project "autotest-per-patch" build number 126230
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.034 The recommended git tool is: git
00:00:00.035 using credential 00000000-0000-0000-0000-000000000002
00:00:00.037 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.056 Fetching changes from the remote Git repository
00:00:00.060 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.081 Using shallow fetch with depth 1
00:00:00.081 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.081 > git --version # timeout=10
00:00:00.125 > git --version # 'git version 2.39.2'
00:00:00.125 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.166 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.166 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.933 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.944 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.956 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:02.956 > git config core.sparsecheckout # timeout=10
00:00:02.966 > git read-tree -mu HEAD # timeout=10
00:00:02.981 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:03.001 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:03.001 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:03.100 [Pipeline] Start of Pipeline
00:00:03.114 [Pipeline] library
00:00:03.115 Loading library shm_lib@master
00:00:03.116 Library shm_lib@master is cached. Copying from home.
00:00:03.131 [Pipeline] node
00:00:03.146 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:03.148 [Pipeline] {
00:00:03.158 [Pipeline] catchError
00:00:03.159 [Pipeline] {
00:00:03.170 [Pipeline] wrap
00:00:03.178 [Pipeline] {
00:00:03.189 [Pipeline] stage
00:00:03.191 [Pipeline] { (Prologue)
00:00:03.388 [Pipeline] sh
00:00:03.668 + logger -p user.info -t JENKINS-CI
00:00:03.692 [Pipeline] echo
00:00:03.694 Node: WFP20
00:00:03.703 [Pipeline] sh
00:00:04.042 [Pipeline] setCustomBuildProperty
00:00:04.052 [Pipeline] echo
00:00:04.054 Cleanup processes
00:00:04.058 [Pipeline] sh
00:00:04.337 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.337 186971 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.354 [Pipeline] sh
00:00:04.637 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.637 ++ grep -v 'sudo pgrep'
00:00:04.637 ++ awk '{print $1}'
00:00:04.637 + sudo kill -9
00:00:04.637 + true
00:00:04.649 [Pipeline] cleanWs
00:00:04.657 [WS-CLEANUP] Deleting project workspace...
00:00:04.657 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.662 [WS-CLEANUP] done
00:00:04.665 [Pipeline] setCustomBuildProperty
00:00:04.677 [Pipeline] sh
00:00:04.954 + sudo git config --global --replace-all safe.directory '*'
00:00:05.037 [Pipeline] httpRequest
00:00:05.063 [Pipeline] echo
00:00:05.065 Sorcerer 10.211.164.101 is alive
00:00:05.075 [Pipeline] httpRequest
00:00:05.080 HttpMethod: GET
00:00:05.080 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.081 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.082 Response Code: HTTP/1.1 200 OK
00:00:05.083 Success: Status code 200 is in the accepted range: 200,404
00:00:05.083 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.972 [Pipeline] sh
00:00:06.249 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.264 [Pipeline] httpRequest
00:00:06.287 [Pipeline] echo
00:00:06.289 Sorcerer 10.211.164.101 is alive
00:00:06.297 [Pipeline] httpRequest
00:00:06.301 HttpMethod: GET
00:00:06.301 URL: http://10.211.164.101/packages/spdk_6c0846996bb393be04189626d69239816f169775.tar.gz
00:00:06.302 Sending request to url: http://10.211.164.101/packages/spdk_6c0846996bb393be04189626d69239816f169775.tar.gz
00:00:06.316 Response Code: HTTP/1.1 200 OK
00:00:06.317 Success: Status code 200 is in the accepted range: 200,404
00:00:06.317 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_6c0846996bb393be04189626d69239816f169775.tar.gz
00:00:36.714 [Pipeline] sh
00:00:36.995 + tar --no-same-owner -xf spdk_6c0846996bb393be04189626d69239816f169775.tar.gz
00:00:39.539 [Pipeline] sh
00:00:39.825 + git -C spdk log --oneline -n5
00:00:39.825 6c0846996 module/bdev/nvme: add detach-monitor poller
00:00:39.825 70e80ba15 lib/nvme: add scan attached
00:00:39.825 455fda465 nvme_pci: ctrlr_scan_attached callback
00:00:39.825 a732bf2a5 nvme_transport: optional callback to scan attached
00:00:39.825 2728651ee accel: adjust task per ch define name
00:00:39.842 [Pipeline] }
00:00:39.864 [Pipeline] // stage
00:00:39.876 [Pipeline] stage
00:00:39.878 [Pipeline] { (Prepare)
00:00:39.898 [Pipeline] writeFile
00:00:39.917 [Pipeline] sh
00:00:40.230 + logger -p user.info -t JENKINS-CI
00:00:40.245 [Pipeline] sh
00:00:40.530 + logger -p user.info -t JENKINS-CI
00:00:40.544 [Pipeline] sh
00:00:40.828 + cat autorun-spdk.conf
00:00:40.828 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:40.828 SPDK_TEST_FUZZER_SHORT=1
00:00:40.828 SPDK_TEST_FUZZER=1
00:00:40.828 SPDK_RUN_UBSAN=1
00:00:40.836 RUN_NIGHTLY=0
00:00:40.842 [Pipeline] readFile
00:00:40.880 [Pipeline] withEnv
00:00:40.883 [Pipeline] {
00:00:40.900 [Pipeline] sh
00:00:41.186 + set -ex
00:00:41.186 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:41.186 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:41.186 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.186 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:41.186 ++ SPDK_TEST_FUZZER=1
00:00:41.186 ++ SPDK_RUN_UBSAN=1
00:00:41.186 ++ RUN_NIGHTLY=0
00:00:41.186 + case $SPDK_TEST_NVMF_NICS in
00:00:41.186 + DRIVERS=
00:00:41.186 + [[ -n '' ]]
00:00:41.186 + exit 0
00:00:41.194 [Pipeline] }
00:00:41.208 [Pipeline] // withEnv
00:00:41.212 [Pipeline] }
00:00:41.224 [Pipeline] // stage
00:00:41.232 [Pipeline] catchError
00:00:41.234 [Pipeline] {
00:00:41.247 [Pipeline] timeout
00:00:41.247 Timeout set to expire in 30 min
00:00:41.249 [Pipeline] {
00:00:41.264 [Pipeline] stage
00:00:41.266 [Pipeline] { (Tests)
00:00:41.279 [Pipeline] sh
00:00:41.558 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:41.558 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:41.558 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:41.558 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:41.559 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:41.559 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:41.559 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:41.559 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:41.559 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:41.559 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:41.559 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:41.559 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:41.559 + source /etc/os-release
00:00:41.559 ++ NAME='Fedora Linux'
00:00:41.559 ++ VERSION='38 (Cloud Edition)'
00:00:41.559 ++ ID=fedora
00:00:41.559 ++ VERSION_ID=38
00:00:41.559 ++ VERSION_CODENAME=
00:00:41.559 ++ PLATFORM_ID=platform:f38
00:00:41.559 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:41.559 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:41.559 ++ LOGO=fedora-logo-icon
00:00:41.559 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:41.559 ++ HOME_URL=https://fedoraproject.org/
00:00:41.559 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:41.559 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:41.559 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:41.559 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:41.559 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:41.559 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:41.559 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:41.559 ++ SUPPORT_END=2024-05-14
00:00:41.559 ++ VARIANT='Cloud Edition'
00:00:41.559 ++ VARIANT_ID=cloud
00:00:41.559 + uname -a
00:00:41.559 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:41.559 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:44.096 Hugepages
00:00:44.096 node hugesize free / total
00:00:44.096 node0 1048576kB 0 / 0
00:00:44.096 node0 2048kB 0 / 0
00:00:44.354 node1 1048576kB 0 / 0
00:00:44.354 node1 2048kB 0 / 0
00:00:44.354
00:00:44.354 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:44.354 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:44.354 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:44.354 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:44.354 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:44.354 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:44.354 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:44.354 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:44.354 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:44.354 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:44.354 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:44.354 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:44.354 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:44.354 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:44.354 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:44.354 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:44.354 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:44.354 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:44.354 + rm -f
/tmp/spdk-ld-path 00:00:44.354 + source autorun-spdk.conf 00:00:44.354 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.354 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:44.354 ++ SPDK_TEST_FUZZER=1 00:00:44.354 ++ SPDK_RUN_UBSAN=1 00:00:44.354 ++ RUN_NIGHTLY=0 00:00:44.354 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:44.354 + [[ -n '' ]] 00:00:44.354 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:44.354 + for M in /var/spdk/build-*-manifest.txt 00:00:44.354 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:44.354 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:44.354 + for M in /var/spdk/build-*-manifest.txt 00:00:44.354 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:44.354 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:44.354 ++ uname 00:00:44.354 + [[ Linux == \L\i\n\u\x ]] 00:00:44.354 + sudo dmesg -T 00:00:44.613 + sudo dmesg --clear 00:00:44.613 + dmesg_pid=187868 00:00:44.613 + [[ Fedora Linux == FreeBSD ]] 00:00:44.613 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:44.613 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:44.613 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:44.613 + [[ -x /usr/src/fio-static/fio ]] 00:00:44.613 + export FIO_BIN=/usr/src/fio-static/fio 00:00:44.613 + FIO_BIN=/usr/src/fio-static/fio 00:00:44.613 + sudo dmesg -Tw 00:00:44.613 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:44.613 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:44.613 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:44.613 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:44.613 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:44.613 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:44.613 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:44.613 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:44.613 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:44.613 Test configuration: 00:00:44.613 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.613 SPDK_TEST_FUZZER_SHORT=1 00:00:44.613 SPDK_TEST_FUZZER=1 00:00:44.613 SPDK_RUN_UBSAN=1 00:00:44.613 RUN_NIGHTLY=0 20:14:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:44.613 20:14:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:44.613 20:14:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:44.613 20:14:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:44.613 20:14:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.613 20:14:36 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.613 20:14:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.613 20:14:36 -- paths/export.sh@5 -- $ export PATH 00:00:44.613 20:14:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:44.613 20:14:36 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:44.613 20:14:36 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:44.613 20:14:36 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721067276.XXXXXX 00:00:44.613 20:14:36 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721067276.DnsO8V 00:00:44.613 20:14:36 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:44.613 20:14:36 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:44.613 20:14:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:44.613 20:14:36 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:44.613 20:14:36 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:44.613 20:14:36 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:44.613 20:14:36 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:44.613 20:14:36 -- common/autotest_common.sh@10 -- $ set +x 00:00:44.613 20:14:36 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:44.613 20:14:36 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:44.613 20:14:36 -- pm/common@17 -- $ local monitor 00:00:44.613 20:14:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.613 20:14:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.613 20:14:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.613 20:14:36 -- pm/common@21 -- $ date +%s 00:00:44.613 20:14:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:44.613 20:14:36 -- pm/common@21 -- $ date +%s 
00:00:44.613 20:14:36 -- pm/common@21 -- $ date +%s 00:00:44.613 20:14:36 -- pm/common@25 -- $ sleep 1 00:00:44.613 20:14:36 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067276 00:00:44.613 20:14:36 -- pm/common@21 -- $ date +%s 00:00:44.613 20:14:36 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067276 00:00:44.613 20:14:36 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067276 00:00:44.614 20:14:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067276 00:00:44.614 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067276_collect-cpu-temp.pm.log 00:00:44.872 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067276_collect-vmstat.pm.log 00:00:44.872 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067276_collect-cpu-load.pm.log 00:00:44.872 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067276_collect-bmc-pm.bmc.pm.log 00:00:45.810 20:14:37 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:45.810 20:14:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:45.810 20:14:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:45.810 20:14:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:45.810 20:14:37 -- spdk/autobuild.sh@16 -- $ date -u 00:00:45.810 Mon Jul 15 06:14:37 PM UTC 2024 00:00:45.810 20:14:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:45.810 v24.09-pre-210-g6c0846996 00:00:45.810 20:14:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:45.810 20:14:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:45.810 20:14:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:45.810 20:14:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:45.810 20:14:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:45.810 20:14:37 -- common/autotest_common.sh@10 -- $ set +x 00:00:45.810 ************************************ 00:00:45.810 START TEST ubsan 00:00:45.810 ************************************ 00:00:45.810 20:14:38 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:45.810 using ubsan 00:00:45.810 00:00:45.810 real 0m0.000s 00:00:45.810 user 0m0.000s 00:00:45.810 sys 0m0.000s 00:00:45.810 20:14:38 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:45.810 20:14:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:45.810 ************************************ 00:00:45.810 END TEST ubsan 00:00:45.810 ************************************ 00:00:45.810 20:14:38 -- common/autotest_common.sh@1142 -- $ return 0 00:00:45.810 20:14:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:45.810 20:14:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:45.810 20:14:38 
-- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:45.810 20:14:38 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:45.810 20:14:38 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:45.810 20:14:38 -- common/autobuild_common.sh@432 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:45.810 20:14:38 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:00:45.810 20:14:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:45.810 20:14:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:45.810 ************************************ 00:00:45.810 START TEST autobuild_llvm_precompile 00:00:45.810 ************************************ 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:00:45.810 Target: x86_64-redhat-linux-gnu 00:00:45.810 Thread model: posix 00:00:45.810 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:00:45.810 20:14:38 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:46.069 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:46.069 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:46.635 Using 'verbs' RDMA provider 00:01:02.460 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:17.349 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:17.349 Creating mk/config.mk...done. 00:01:17.349 Creating mk/cc.flags.mk...done. 
00:01:17.349 Type 'make' to build. 00:01:17.349 00:01:17.349 real 0m29.500s 00:01:17.349 user 0m12.710s 00:01:17.349 sys 0m16.205s 00:01:17.349 20:15:07 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:17.349 20:15:07 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:17.349 ************************************ 00:01:17.349 END TEST autobuild_llvm_precompile 00:01:17.349 ************************************ 00:01:17.349 20:15:07 -- common/autotest_common.sh@1142 -- $ return 0 00:01:17.349 20:15:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.349 20:15:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:17.349 20:15:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:17.349 20:15:07 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:17.349 20:15:07 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:17.349 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:17.349 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:17.349 Using 'verbs' RDMA provider 00:01:29.559 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:41.768 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:41.768 Creating mk/config.mk...done. 00:01:41.768 Creating mk/cc.flags.mk...done. 00:01:41.768 Type 'make' to build. 00:01:41.768 20:15:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:41.768 20:15:32 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:41.768 20:15:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:41.768 20:15:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.768 ************************************ 00:01:41.768 START TEST make 00:01:41.768 ************************************ 00:01:41.768 20:15:32 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:41.768 make[1]: Nothing to be done for 'all'. 
00:01:42.027 The Meson build system 00:01:42.027 Version: 1.3.1 00:01:42.027 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:42.027 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:42.027 Build type: native build 00:01:42.027 Project name: libvfio-user 00:01:42.027 Project version: 0.0.1 00:01:42.027 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:42.027 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:42.027 Host machine cpu family: x86_64 00:01:42.027 Host machine cpu: x86_64 00:01:42.027 Run-time dependency threads found: YES 00:01:42.027 Library dl found: YES 00:01:42.027 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:42.027 Run-time dependency json-c found: YES 0.17 00:01:42.027 Run-time dependency cmocka found: YES 1.1.7 00:01:42.027 Program pytest-3 found: NO 00:01:42.027 Program flake8 found: NO 00:01:42.027 Program misspell-fixer found: NO 00:01:42.027 Program restructuredtext-lint found: NO 00:01:42.027 Program valgrind found: YES (/usr/bin/valgrind) 00:01:42.027 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:42.027 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:42.027 Compiler for C supports arguments -Wwrite-strings: YES 00:01:42.027 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:42.028 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:42.028 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:42.028 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:42.028 Build targets in project: 8 00:01:42.028 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:42.028 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:42.028 00:01:42.028 libvfio-user 0.0.1 00:01:42.028 00:01:42.028 User defined options 00:01:42.028 buildtype : debug 00:01:42.028 default_library: static 00:01:42.028 libdir : /usr/local/lib 00:01:42.028 00:01:42.028 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:42.674 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:42.674 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:42.674 [2/36] Compiling C object samples/null.p/null.c.o 00:01:42.674 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:42.674 [4/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:42.674 [5/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:42.674 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:42.674 [7/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:42.674 [8/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:42.674 [9/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:42.674 [10/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:42.674 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:42.674 [12/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:42.674 [13/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:42.674 [14/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:42.674 [15/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:42.674 [16/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:42.674 [17/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:42.674 [18/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:42.674 [19/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:42.674 [20/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:42.674 [21/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:42.674 [22/36] Compiling C object samples/server.p/server.c.o 00:01:42.674 [23/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:42.674 [24/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:42.674 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:42.674 [26/36] Compiling C object samples/client.p/client.c.o 00:01:42.674 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:42.674 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:42.674 [29/36] Linking static target lib/libvfio-user.a 00:01:42.674 [30/36] Linking target samples/client 00:01:42.674 [31/36] Linking target test/unit_tests 00:01:42.674 [32/36] Linking target samples/lspci 00:01:42.974 [33/36] Linking target samples/server 00:01:42.974 [34/36] Linking target samples/null 00:01:42.974 [35/36] Linking target samples/gpio-pci-idio-16 00:01:42.974 [36/36] Linking target samples/shadow_ioeventfd_server 00:01:42.974 INFO: autodetecting backend as ninja 00:01:42.974 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:42.974 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.233 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:43.233 ninja: no work to do. 00:01:48.539 The Meson build system 00:01:48.539 Version: 1.3.1 00:01:48.539 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:48.539 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:48.539 Build type: native build 00:01:48.539 Program cat found: YES (/usr/bin/cat) 00:01:48.539 Project name: DPDK 00:01:48.539 Project version: 24.03.0 00:01:48.539 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:48.539 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:48.539 Host machine cpu family: x86_64 00:01:48.539 Host machine cpu: x86_64 00:01:48.539 Message: ## Building in Developer Mode ## 00:01:48.539 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.539 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:48.539 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.539 Program python3 found: YES (/usr/bin/python3) 00:01:48.539 Program cat found: YES (/usr/bin/cat) 00:01:48.539 Compiler for C supports arguments -march=native: YES 00:01:48.539 Checking for size of "void *" : 8 00:01:48.539 Checking for size of "void *" : 8 (cached) 00:01:48.539 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:48.539 Library m found: YES 00:01:48.539 Library numa found: YES 00:01:48.539 Has header "numaif.h" : YES 00:01:48.539 Library fdt found: NO 00:01:48.539 Library execinfo found: NO 00:01:48.539 Has header "execinfo.h" : YES 00:01:48.539 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:48.539 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.539 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.539 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.539 Run-time dependency openssl found: YES 3.0.9 00:01:48.539 Run-time dependency libpcap found: YES 1.10.4 00:01:48.539 Has header "pcap.h" with dependency libpcap: YES 00:01:48.539 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.539 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.539 Compiler for C supports arguments -Wformat: YES 00:01:48.539 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:48.539 Compiler for C supports arguments -Wformat-security: YES 00:01:48.539 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.539 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.539 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.539 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.539 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.539 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.539 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.539 Compiler for C supports arguments -Wundef: YES 00:01:48.539 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.539 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.539 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:48.539 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:48.539 Program objdump found: YES (/usr/bin/objdump) 00:01:48.539 Compiler for C supports arguments -mavx512f: YES 00:01:48.539 Checking if "AVX512 checking" compiles: YES 00:01:48.539 Fetching value of define "__SSE4_2__" : 1 00:01:48.539 Fetching value of define "__AES__" : 1 00:01:48.539 Fetching value of define "__AVX__" : 1 00:01:48.539 Fetching value of define "__AVX2__" : 1 00:01:48.539 Fetching value of define "__AVX512BW__" : 1 00:01:48.540 Fetching value of define "__AVX512CD__" : 1 00:01:48.540 Fetching value of define "__AVX512DQ__" : 1 00:01:48.540 Fetching value of define "__AVX512F__" : 1 00:01:48.540 Fetching value of define "__AVX512VL__" : 1 00:01:48.540 Fetching value of define "__PCLMUL__" : 1 00:01:48.540 Fetching value of define "__RDRND__" : 1 00:01:48.540 Fetching value of define "__RDSEED__" : 1 00:01:48.540 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.540 Fetching value of define "__znver1__" : (undefined) 00:01:48.540 Fetching value of define "__znver2__" : (undefined) 00:01:48.540 Fetching value of define "__znver3__" : (undefined) 00:01:48.540 Fetching value of define "__znver4__" : (undefined) 00:01:48.540 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:48.540 Message: lib/log: Defining dependency "log" 00:01:48.540 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.540 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.540 Checking for function "getentropy" : NO 00:01:48.540 Message: lib/eal: Defining dependency "eal" 00:01:48.540 Message: lib/ring: Defining dependency "ring" 00:01:48.540 Message: lib/rcu: Defining dependency "rcu" 00:01:48.540 Message: lib/mempool: Defining dependency "mempool" 00:01:48.540 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.540 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.540 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.540 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.540 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.540 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:48.540 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:48.540 Compiler for C supports arguments -mpclmul: YES 00:01:48.540 Compiler for C supports arguments -maes: YES 00:01:48.540 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.540 Compiler for C supports arguments -mavx512bw: YES 00:01:48.540 Compiler for C supports arguments -mavx512dq: YES 00:01:48.540 Compiler for C supports arguments -mavx512vl: YES 00:01:48.540 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.540 Compiler for C supports arguments -mavx2: YES 00:01:48.540 Compiler for C supports arguments -mavx: YES 00:01:48.540 Message: lib/net: Defining dependency "net" 00:01:48.540 Message: lib/meter: Defining dependency "meter" 00:01:48.540 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.540 Message: lib/pci: Defining dependency "pci" 00:01:48.540 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.540 Message: lib/hash: Defining dependency "hash" 00:01:48.540 Message: lib/timer: Defining dependency "timer" 00:01:48.540 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.540 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.540 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.540 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.540 Message: lib/power: Defining dependency "power" 00:01:48.540 Message: lib/reorder: Defining 
dependency "reorder" 00:01:48.540 Message: lib/security: Defining dependency "security" 00:01:48.540 Has header "linux/userfaultfd.h" : YES 00:01:48.540 Has header "linux/vduse.h" : YES 00:01:48.540 Message: lib/vhost: Defining dependency "vhost" 00:01:48.540 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:48.540 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.540 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.540 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.540 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:48.540 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:48.540 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:48.540 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:48.540 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:48.540 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:48.540 Program doxygen found: YES (/usr/bin/doxygen) 00:01:48.540 Configuring doxy-api-html.conf using configuration 00:01:48.540 Configuring doxy-api-man.conf using configuration 00:01:48.540 Program mandb found: YES (/usr/bin/mandb) 00:01:48.540 Program sphinx-build found: NO 00:01:48.540 Configuring rte_build_config.h using configuration 00:01:48.540 Message: 00:01:48.540 ================= 00:01:48.540 Applications Enabled 00:01:48.540 ================= 00:01:48.540 00:01:48.540 apps: 00:01:48.540 00:01:48.540 00:01:48.540 Message: 00:01:48.540 ================= 00:01:48.540 Libraries Enabled 00:01:48.540 ================= 00:01:48.540 00:01:48.540 libs: 00:01:48.540 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:48.540 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:48.540 cryptodev, dmadev, power, reorder, security, vhost, 00:01:48.540 00:01:48.540 Message: 00:01:48.540 =============== 00:01:48.540 Drivers Enabled 00:01:48.540 =============== 00:01:48.540 00:01:48.540 common: 00:01:48.540 00:01:48.540 bus: 00:01:48.540 pci, vdev, 00:01:48.540 mempool: 00:01:48.540 ring, 00:01:48.540 dma: 00:01:48.540 00:01:48.540 net: 00:01:48.540 00:01:48.540 crypto: 00:01:48.540 00:01:48.540 compress: 00:01:48.540 00:01:48.540 vdpa: 00:01:48.540 00:01:48.540 00:01:48.540 Message: 00:01:48.540 ================= 00:01:48.540 Content Skipped 00:01:48.540 ================= 00:01:48.540 00:01:48.540 apps: 00:01:48.540 dumpcap: explicitly disabled via build config 00:01:48.540 graph: explicitly disabled via build config 00:01:48.540 pdump: explicitly disabled via build config 00:01:48.540 proc-info: explicitly disabled via build config 00:01:48.540 test-acl: explicitly disabled via build config 00:01:48.540 test-bbdev: explicitly disabled via build config 00:01:48.540 test-cmdline: explicitly disabled via build config 00:01:48.540 test-compress-perf: explicitly disabled via build config 00:01:48.540 test-crypto-perf: explicitly disabled via build config 00:01:48.540 test-dma-perf: explicitly disabled via build config 00:01:48.540 test-eventdev: explicitly disabled via build config 00:01:48.540 test-fib: explicitly disabled via build config 00:01:48.540 test-flow-perf: explicitly disabled via build config 00:01:48.540 test-gpudev: explicitly disabled via build config 00:01:48.540 test-mldev: explicitly disabled via build config 00:01:48.540 test-pipeline: explicitly disabled via build config 00:01:48.540 test-pmd: explicitly 
disabled via build config 00:01:48.540 test-regex: explicitly disabled via build config 00:01:48.540 test-sad: explicitly disabled via build config 00:01:48.540 test-security-perf: explicitly disabled via build config 00:01:48.540 00:01:48.540 libs: 00:01:48.540 argparse: explicitly disabled via build config 00:01:48.540 metrics: explicitly disabled via build config 00:01:48.540 acl: explicitly disabled via build config 00:01:48.540 bbdev: explicitly disabled via build config 00:01:48.540 bitratestats: explicitly disabled via build config 00:01:48.540 bpf: explicitly disabled via build config 00:01:48.540 cfgfile: explicitly disabled via build config 00:01:48.540 distributor: explicitly disabled via build config 00:01:48.540 efd: explicitly disabled via build config 00:01:48.540 eventdev: explicitly disabled via build config 00:01:48.540 dispatcher: explicitly disabled via build config 00:01:48.540 gpudev: explicitly disabled via build config 00:01:48.540 gro: explicitly disabled via build config 00:01:48.540 gso: explicitly disabled via build config 00:01:48.540 ip_frag: explicitly disabled via build config 00:01:48.540 jobstats: explicitly disabled via build config 00:01:48.540 latencystats: explicitly disabled via build config 00:01:48.540 lpm: explicitly disabled via build config 00:01:48.540 member: explicitly disabled via build config 00:01:48.540 pcapng: explicitly disabled via build config 00:01:48.540 rawdev: explicitly disabled via build config 00:01:48.540 regexdev: explicitly disabled via build config 00:01:48.540 mldev: explicitly disabled via build config 00:01:48.540 rib: explicitly disabled via build config 00:01:48.540 sched: explicitly disabled via build config 00:01:48.540 stack: explicitly disabled via build config 00:01:48.540 ipsec: explicitly disabled via build config 00:01:48.540 pdcp: explicitly disabled via build config 00:01:48.540 fib: explicitly disabled via build config 00:01:48.540 port: explicitly disabled via build config 00:01:48.541 pdump: explicitly disabled via build config 00:01:48.541 table: explicitly disabled via build config 00:01:48.541 pipeline: explicitly disabled via build config 00:01:48.541 graph: explicitly disabled via build config 00:01:48.541 node: explicitly disabled via build config 00:01:48.541 00:01:48.541 drivers: 00:01:48.541 common/cpt: not in enabled drivers build config 00:01:48.541 common/dpaax: not in enabled drivers build config 00:01:48.541 common/iavf: not in enabled drivers build config 00:01:48.541 common/idpf: not in enabled drivers build config 00:01:48.541 common/ionic: not in enabled drivers build config 00:01:48.541 common/mvep: not in enabled drivers build config 00:01:48.541 common/octeontx: not in enabled drivers build config 00:01:48.541 bus/auxiliary: not in enabled drivers build config 00:01:48.541 bus/cdx: not in enabled drivers build config 00:01:48.541 bus/dpaa: not in enabled drivers build config 00:01:48.541 bus/fslmc: not in enabled drivers build config 00:01:48.541 bus/ifpga: not in enabled drivers build config 00:01:48.541 bus/platform: not in enabled drivers build config 00:01:48.541 bus/uacce: not in enabled drivers build config 00:01:48.541 bus/vmbus: not in enabled drivers build config 00:01:48.541 common/cnxk: not in enabled drivers build config 00:01:48.541 common/mlx5: not in enabled drivers build config 00:01:48.541 common/nfp: not in enabled drivers build config 00:01:48.541 common/nitrox: not in enabled drivers build config 00:01:48.541 common/qat: not in enabled drivers build config 
00:01:48.541 common/sfc_efx: not in enabled drivers build config 00:01:48.541 mempool/bucket: not in enabled drivers build config 00:01:48.541 mempool/cnxk: not in enabled drivers build config 00:01:48.541 mempool/dpaa: not in enabled drivers build config 00:01:48.541 mempool/dpaa2: not in enabled drivers build config 00:01:48.541 mempool/octeontx: not in enabled drivers build config 00:01:48.541 mempool/stack: not in enabled drivers build config 00:01:48.541 dma/cnxk: not in enabled drivers build config 00:01:48.541 dma/dpaa: not in enabled drivers build config 00:01:48.541 dma/dpaa2: not in enabled drivers build config 00:01:48.541 dma/hisilicon: not in enabled drivers build config 00:01:48.541 dma/idxd: not in enabled drivers build config 00:01:48.541 dma/ioat: not in enabled drivers build config 00:01:48.541 dma/skeleton: not in enabled drivers build config 00:01:48.541 net/af_packet: not in enabled drivers build config 00:01:48.541 net/af_xdp: not in enabled drivers build config 00:01:48.541 net/ark: not in enabled drivers build config 00:01:48.541 net/atlantic: not in enabled drivers build config 00:01:48.541 net/avp: not in enabled drivers build config 00:01:48.541 net/axgbe: not in enabled drivers build config 00:01:48.541 net/bnx2x: not in enabled drivers build config 00:01:48.541 net/bnxt: not in enabled drivers build config 00:01:48.541 net/bonding: not in enabled drivers build config 00:01:48.541 net/cnxk: not in enabled drivers build config 00:01:48.541 net/cpfl: not in enabled drivers build config 00:01:48.541 net/cxgbe: not in enabled drivers build config 00:01:48.541 net/dpaa: not in enabled drivers build config 00:01:48.541 net/dpaa2: not in enabled drivers build config 00:01:48.541 net/e1000: not in enabled drivers build config 00:01:48.541 net/ena: not in enabled drivers build config 00:01:48.541 net/enetc: not in enabled drivers build config 00:01:48.541 net/enetfec: not in enabled drivers build config 00:01:48.541 net/enic: not in enabled drivers build config 00:01:48.541 net/failsafe: not in enabled drivers build config 00:01:48.541 net/fm10k: not in enabled drivers build config 00:01:48.541 net/gve: not in enabled drivers build config 00:01:48.541 net/hinic: not in enabled drivers build config 00:01:48.541 net/hns3: not in enabled drivers build config 00:01:48.541 net/i40e: not in enabled drivers build config 00:01:48.541 net/iavf: not in enabled drivers build config 00:01:48.541 net/ice: not in enabled drivers build config 00:01:48.541 net/idpf: not in enabled drivers build config 00:01:48.541 net/igc: not in enabled drivers build config 00:01:48.541 net/ionic: not in enabled drivers build config 00:01:48.541 net/ipn3ke: not in enabled drivers build config 00:01:48.541 net/ixgbe: not in enabled drivers build config 00:01:48.541 net/mana: not in enabled drivers build config 00:01:48.541 net/memif: not in enabled drivers build config 00:01:48.541 net/mlx4: not in enabled drivers build config 00:01:48.541 net/mlx5: not in enabled drivers build config 00:01:48.541 net/mvneta: not in enabled drivers build config 00:01:48.541 net/mvpp2: not in enabled drivers build config 00:01:48.541 net/netvsc: not in enabled drivers build config 00:01:48.541 net/nfb: not in enabled drivers build config 00:01:48.541 net/nfp: not in enabled drivers build config 00:01:48.541 net/ngbe: not in enabled drivers build config 00:01:48.541 net/null: not in enabled drivers build config 00:01:48.541 net/octeontx: not in enabled drivers build config 00:01:48.541 net/octeon_ep: not in enabled 
drivers build config 00:01:48.541 net/pcap: not in enabled drivers build config 00:01:48.541 net/pfe: not in enabled drivers build config 00:01:48.541 net/qede: not in enabled drivers build config 00:01:48.541 net/ring: not in enabled drivers build config 00:01:48.541 net/sfc: not in enabled drivers build config 00:01:48.541 net/softnic: not in enabled drivers build config 00:01:48.541 net/tap: not in enabled drivers build config 00:01:48.541 net/thunderx: not in enabled drivers build config 00:01:48.541 net/txgbe: not in enabled drivers build config 00:01:48.541 net/vdev_netvsc: not in enabled drivers build config 00:01:48.541 net/vhost: not in enabled drivers build config 00:01:48.541 net/virtio: not in enabled drivers build config 00:01:48.541 net/vmxnet3: not in enabled drivers build config 00:01:48.541 raw/*: missing internal dependency, "rawdev" 00:01:48.541 crypto/armv8: not in enabled drivers build config 00:01:48.541 crypto/bcmfs: not in enabled drivers build config 00:01:48.541 crypto/caam_jr: not in enabled drivers build config 00:01:48.541 crypto/ccp: not in enabled drivers build config 00:01:48.541 crypto/cnxk: not in enabled drivers build config 00:01:48.541 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.541 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.541 crypto/ipsec_mb: not in enabled drivers build config 00:01:48.541 crypto/mlx5: not in enabled drivers build config 00:01:48.541 crypto/mvsam: not in enabled drivers build config 00:01:48.541 crypto/nitrox: not in enabled drivers build config 00:01:48.541 crypto/null: not in enabled drivers build config 00:01:48.541 crypto/octeontx: not in enabled drivers build config 00:01:48.541 crypto/openssl: not in enabled drivers build config 00:01:48.541 crypto/scheduler: not in enabled drivers build config 00:01:48.541 crypto/uadk: not in enabled drivers build config 00:01:48.541 crypto/virtio: not in enabled drivers build config 00:01:48.541 compress/isal: not in enabled drivers build config 00:01:48.541 compress/mlx5: not in enabled drivers build config 00:01:48.541 compress/nitrox: not in enabled drivers build config 00:01:48.541 compress/octeontx: not in enabled drivers build config 00:01:48.541 compress/zlib: not in enabled drivers build config 00:01:48.541 regex/*: missing internal dependency, "regexdev" 00:01:48.541 ml/*: missing internal dependency, "mldev" 00:01:48.541 vdpa/ifc: not in enabled drivers build config 00:01:48.541 vdpa/mlx5: not in enabled drivers build config 00:01:48.541 vdpa/nfp: not in enabled drivers build config 00:01:48.541 vdpa/sfc: not in enabled drivers build config 00:01:48.541 event/*: missing internal dependency, "eventdev" 00:01:48.541 baseband/*: missing internal dependency, "bbdev" 00:01:48.541 gpu/*: missing internal dependency, "gpudev" 00:01:48.541 00:01:48.541 00:01:48.541 Build targets in project: 85 00:01:48.541 00:01:48.541 DPDK 24.03.0 00:01:48.541 00:01:48.541 User defined options 00:01:48.541 buildtype : debug 00:01:48.541 default_library : static 00:01:48.541 libdir : lib 00:01:48.541 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:48.541 c_args : -fPIC -Werror 00:01:48.541 c_link_args : 00:01:48.541 cpu_instruction_set: native 00:01:48.541 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:48.541 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:48.541 enable_docs : false 00:01:48.541 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:48.541 enable_kmods : false 00:01:48.541 max_lcores : 128 00:01:48.541 tests : false 00:01:48.541 00:01:48.541 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.541 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:48.802 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:48.802 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:48.802 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:48.802 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:48.802 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.802 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:48.802 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:48.802 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.802 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:48.802 [10/268] Linking static target lib/librte_kvargs.a 00:01:48.802 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.802 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:48.802 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:48.802 [14/268] Linking static target lib/librte_log.a 00:01:48.802 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:48.802 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:48.802 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:48.802 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.802 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:48.802 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.802 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:48.802 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.802 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.802 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.802 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.802 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:48.802 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.802 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:48.802 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:48.802 [30/268] Linking static target lib/librte_pci.a 00:01:48.802 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:48.802 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.802 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.802 [34/268] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:49.065 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:49.065 [36/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.065 [37/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.065 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:49.065 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:49.065 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:49.065 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:49.065 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:49.065 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:49.325 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:49.325 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:49.325 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:49.325 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:49.325 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:49.325 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:49.325 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:49.325 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:49.325 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:49.325 [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:49.325 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:49.325 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:49.325 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:49.325 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:49.325 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:49.325 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:49.325 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:49.325 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:49.325 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:49.325 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:49.325 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:49.325 [65/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:49.325 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:49.325 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:49.325 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:49.325 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:49.325 [70/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:49.325 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:49.325 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:49.325 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:49.325 [74/268] Linking static target lib/librte_telemetry.a 00:01:49.325 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:49.325 [76/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:49.325 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:49.325 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:49.325 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:49.325 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:49.325 [81/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:49.325 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:49.325 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:49.325 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:49.325 [85/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:49.325 [86/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:49.325 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:49.325 [88/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:49.325 [89/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:49.325 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:49.325 [91/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:49.325 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:49.325 [93/268] Linking static target lib/librte_meter.a 00:01:49.325 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:49.325 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:49.325 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:49.325 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:49.325 [98/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:49.325 [99/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:49.325 [100/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:49.325 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:49.325 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:49.325 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:49.325 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:49.325 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:49.325 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:49.325 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:49.325 [108/268] Linking static target lib/librte_ring.a 00:01:49.325 [109/268] Linking static target lib/librte_timer.a 00:01:49.325 [110/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:49.325 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:49.325 [112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:49.325 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:49.325 [114/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:49.325 [115/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.325 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.325 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:49.325 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.326 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:49.326 [120/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:49.326 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:49.326 [122/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:49.326 [123/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:49.326 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:49.326 [125/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:49.326 [126/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:49.326 [127/268] Linking static target lib/librte_cmdline.a 00:01:49.326 [128/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:49.326 [129/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:49.326 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.326 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:49.326 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:49.326 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:49.326 [134/268] Linking static target lib/librte_rcu.a 00:01:49.326 [135/268] Linking static target lib/librte_eal.a 00:01:49.326 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:49.326 [137/268] Linking target lib/librte_log.so.24.1 00:01:49.326 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:49.326 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:49.326 [140/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.326 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.326 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:49.326 [143/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:49.326 [144/268] Linking static target lib/librte_net.a 00:01:49.326 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:49.326 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:49.326 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.326 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.585 [149/268] Linking static target lib/librte_dmadev.a 00:01:49.585 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:49.585 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:49.585 [152/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:49.585 [153/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:49.585 [154/268] Linking static target lib/librte_mbuf.a 00:01:49.585 [155/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:49.585 [156/268] Linking static target lib/librte_compressdev.a 00:01:49.585 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:49.585 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:49.585 [159/268] Linking static target lib/librte_mempool.a 00:01:49.585 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.585 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:49.585 [162/268] Linking target lib/librte_kvargs.so.24.1 00:01:49.585 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:49.585 [164/268] Linking static target lib/librte_hash.a 00:01:49.585 [165/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.585 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:49.585 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:49.585 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:49.585 [169/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:49.585 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:49.585 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:49.585 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:49.585 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:49.585 [174/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:49.585 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:49.585 [176/268] Linking static target lib/librte_power.a 00:01:49.585 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:49.585 [178/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.585 [179/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:49.585 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:49.585 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.585 [182/268] Linking static target lib/librte_reorder.a 00:01:49.585 [183/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.585 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:49.585 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:49.585 [186/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:49.585 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:49.585 [188/268] Linking static target lib/librte_cryptodev.a 00:01:49.585 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:49.585 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.585 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:49.845 [192/268] Linking static target lib/librte_security.a 00:01:49.845 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.845 [194/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.845 [195/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.845 [196/268] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.845 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.845 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:49.845 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:49.845 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:49.845 [201/268] Linking target lib/librte_telemetry.so.24.1 00:01:49.845 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.845 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.845 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.845 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.845 [206/268] Linking static target drivers/librte_bus_vdev.a 00:01:49.845 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.845 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.845 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.845 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.845 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:49.845 [212/268] Linking static target drivers/librte_mempool_ring.a 00:01:50.104 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:50.104 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:50.104 [215/268] Linking static target lib/librte_ethdev.a 00:01:50.104 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:50.104 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.104 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.104 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.363 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.363 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.363 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.621 [223/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.621 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.621 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.621 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.621 [227/268] Linking static target lib/librte_vhost.a 00:01:50.621 [228/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.881 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.822 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.759 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:00.899 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.276 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.276 [234/268] Linking target lib/librte_eal.so.24.1 00:02:02.534 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:02.534 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:02.534 [237/268] Linking target lib/librte_meter.so.24.1 00:02:02.534 [238/268] Linking target lib/librte_ring.so.24.1 00:02:02.534 [239/268] Linking target lib/librte_pci.so.24.1 00:02:02.534 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:02.534 [241/268] Linking target lib/librte_timer.so.24.1 00:02:02.793 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:02.793 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:02.793 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:02.793 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:02.793 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:02.793 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:02.793 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:02.793 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:02.793 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:02.793 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:03.051 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:03.051 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:03.051 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:03.051 [255/268] Linking target lib/librte_net.so.24.1 00:02:03.051 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:03.051 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:03.051 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:03.310 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:03.310 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:03.310 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:03.310 [262/268] Linking target lib/librte_hash.so.24.1 00:02:03.310 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:03.310 [264/268] Linking target lib/librte_security.so.24.1 00:02:03.310 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:03.568 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:03.568 [267/268] Linking target lib/librte_vhost.so.24.1 00:02:03.568 [268/268] Linking target lib/librte_power.so.24.1 00:02:03.568 INFO: autodetecting backend as ninja 00:02:03.568 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:04.503 CC lib/log/log.o 00:02:04.503 CC lib/ut_mock/mock.o 00:02:04.503 CC lib/log/log_flags.o 00:02:04.503 CC lib/log/log_deprecated.o 00:02:04.503 CC lib/ut/ut.o 00:02:04.762 LIB libspdk_log.a 00:02:04.762 LIB libspdk_ut_mock.a 00:02:04.762 LIB libspdk_ut.a 00:02:05.021 CC lib/ioat/ioat.o 00:02:05.021 CC lib/dma/dma.o 00:02:05.021 CC lib/util/base64.o 00:02:05.021 CC 
lib/util/crc16.o 00:02:05.021 CC lib/util/bit_array.o 00:02:05.021 CC lib/util/cpuset.o 00:02:05.021 CC lib/util/crc32.o 00:02:05.021 CC lib/util/crc32c.o 00:02:05.021 CC lib/util/crc32_ieee.o 00:02:05.021 CC lib/util/crc64.o 00:02:05.021 CC lib/util/dif.o 00:02:05.021 CC lib/util/hexlify.o 00:02:05.021 CC lib/util/fd.o 00:02:05.021 CC lib/util/file.o 00:02:05.021 CC lib/util/iov.o 00:02:05.021 CC lib/util/math.o 00:02:05.021 CC lib/util/pipe.o 00:02:05.021 CC lib/util/strerror_tls.o 00:02:05.021 CC lib/util/string.o 00:02:05.021 CC lib/util/uuid.o 00:02:05.021 CC lib/util/fd_group.o 00:02:05.021 CC lib/util/xor.o 00:02:05.021 CC lib/util/zipf.o 00:02:05.021 CXX lib/trace_parser/trace.o 00:02:05.021 LIB libspdk_dma.a 00:02:05.021 CC lib/vfio_user/host/vfio_user.o 00:02:05.021 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.021 LIB libspdk_ioat.a 00:02:05.280 LIB libspdk_vfio_user.a 00:02:05.280 LIB libspdk_util.a 00:02:05.540 LIB libspdk_trace_parser.a 00:02:05.540 CC lib/rdma_utils/rdma_utils.o 00:02:05.540 CC lib/conf/conf.o 00:02:05.540 CC lib/env_dpdk/env.o 00:02:05.540 CC lib/env_dpdk/memory.o 00:02:05.540 CC lib/env_dpdk/pci.o 00:02:05.540 CC lib/env_dpdk/pci_ioat.o 00:02:05.540 CC lib/env_dpdk/init.o 00:02:05.540 CC lib/vmd/vmd.o 00:02:05.540 CC lib/env_dpdk/threads.o 00:02:05.540 CC lib/vmd/led.o 00:02:05.540 CC lib/env_dpdk/pci_idxd.o 00:02:05.540 CC lib/env_dpdk/pci_virtio.o 00:02:05.540 CC lib/env_dpdk/pci_event.o 00:02:05.540 CC lib/env_dpdk/pci_vmd.o 00:02:05.540 CC lib/env_dpdk/sigbus_handler.o 00:02:05.540 CC lib/env_dpdk/pci_dpdk.o 00:02:05.540 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.540 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:05.540 CC lib/json/json_parse.o 00:02:05.540 CC lib/json/json_util.o 00:02:05.540 CC lib/json/json_write.o 00:02:05.540 CC lib/rdma_provider/common.o 00:02:05.540 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:05.540 CC lib/idxd/idxd.o 00:02:05.540 CC lib/idxd/idxd_user.o 00:02:05.540 CC lib/idxd/idxd_kernel.o 00:02:05.800 LIB libspdk_conf.a 00:02:05.800 LIB libspdk_rdma_provider.a 00:02:05.800 LIB libspdk_rdma_utils.a 00:02:05.800 LIB libspdk_json.a 00:02:05.800 LIB libspdk_idxd.a 00:02:06.059 LIB libspdk_vmd.a 00:02:06.059 CC lib/jsonrpc/jsonrpc_server.o 00:02:06.059 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:06.059 CC lib/jsonrpc/jsonrpc_client.o 00:02:06.059 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:06.318 LIB libspdk_jsonrpc.a 00:02:06.577 LIB libspdk_env_dpdk.a 00:02:06.577 CC lib/rpc/rpc.o 00:02:06.835 LIB libspdk_rpc.a 00:02:07.094 CC lib/keyring/keyring.o 00:02:07.094 CC lib/keyring/keyring_rpc.o 00:02:07.094 CC lib/notify/notify.o 00:02:07.094 CC lib/trace/trace_flags.o 00:02:07.094 CC lib/trace/trace.o 00:02:07.094 CC lib/trace/trace_rpc.o 00:02:07.094 CC lib/notify/notify_rpc.o 00:02:07.353 LIB libspdk_notify.a 00:02:07.353 LIB libspdk_keyring.a 00:02:07.353 LIB libspdk_trace.a 00:02:07.611 CC lib/thread/thread.o 00:02:07.611 CC lib/thread/iobuf.o 00:02:07.611 CC lib/sock/sock.o 00:02:07.611 CC lib/sock/sock_rpc.o 00:02:07.870 LIB libspdk_sock.a 00:02:08.128 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.128 CC lib/nvme/nvme_fabric.o 00:02:08.128 CC lib/nvme/nvme_ns_cmd.o 00:02:08.128 CC lib/nvme/nvme_ctrlr.o 00:02:08.128 CC lib/nvme/nvme_ns.o 00:02:08.128 CC lib/nvme/nvme_pcie_common.o 00:02:08.128 CC lib/nvme/nvme.o 00:02:08.128 CC lib/nvme/nvme_pcie.o 00:02:08.128 CC lib/nvme/nvme_quirks.o 00:02:08.128 CC lib/nvme/nvme_qpair.o 00:02:08.128 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.128 CC lib/nvme/nvme_transport.o 00:02:08.128 CC 
lib/nvme/nvme_discovery.o 00:02:08.128 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.128 CC lib/nvme/nvme_tcp.o 00:02:08.128 CC lib/nvme/nvme_opal.o 00:02:08.128 CC lib/nvme/nvme_io_msg.o 00:02:08.128 CC lib/nvme/nvme_poll_group.o 00:02:08.128 CC lib/nvme/nvme_zns.o 00:02:08.128 CC lib/nvme/nvme_stubs.o 00:02:08.128 CC lib/nvme/nvme_auth.o 00:02:08.128 CC lib/nvme/nvme_cuse.o 00:02:08.128 CC lib/nvme/nvme_vfio_user.o 00:02:08.128 CC lib/nvme/nvme_rdma.o 00:02:08.387 LIB libspdk_thread.a 00:02:08.647 CC lib/init/subsystem_rpc.o 00:02:08.647 CC lib/init/json_config.o 00:02:08.647 CC lib/init/subsystem.o 00:02:08.647 CC lib/init/rpc.o 00:02:08.647 CC lib/virtio/virtio_vhost_user.o 00:02:08.647 CC lib/accel/accel.o 00:02:08.647 CC lib/virtio/virtio.o 00:02:08.647 CC lib/accel/accel_rpc.o 00:02:08.647 CC lib/virtio/virtio_vfio_user.o 00:02:08.647 CC lib/accel/accel_sw.o 00:02:08.647 CC lib/blob/blobstore.o 00:02:08.647 CC lib/virtio/virtio_pci.o 00:02:08.647 CC lib/blob/request.o 00:02:08.647 CC lib/blob/blob_bs_dev.o 00:02:08.647 CC lib/blob/zeroes.o 00:02:08.647 CC lib/vfu_tgt/tgt_endpoint.o 00:02:08.647 CC lib/vfu_tgt/tgt_rpc.o 00:02:08.905 LIB libspdk_init.a 00:02:08.905 LIB libspdk_virtio.a 00:02:08.905 LIB libspdk_vfu_tgt.a 00:02:09.164 CC lib/event/app.o 00:02:09.164 CC lib/event/reactor.o 00:02:09.164 CC lib/event/app_rpc.o 00:02:09.164 CC lib/event/log_rpc.o 00:02:09.164 CC lib/event/scheduler_static.o 00:02:09.164 LIB libspdk_accel.a 00:02:09.422 LIB libspdk_event.a 00:02:09.422 LIB libspdk_nvme.a 00:02:09.681 CC lib/bdev/bdev.o 00:02:09.681 CC lib/bdev/bdev_rpc.o 00:02:09.681 CC lib/bdev/part.o 00:02:09.681 CC lib/bdev/bdev_zone.o 00:02:09.681 CC lib/bdev/scsi_nvme.o 00:02:10.249 LIB libspdk_blob.a 00:02:10.508 CC lib/lvol/lvol.o 00:02:10.767 CC lib/blobfs/blobfs.o 00:02:10.767 CC lib/blobfs/tree.o 00:02:11.035 LIB libspdk_lvol.a 00:02:11.293 LIB libspdk_blobfs.a 00:02:11.293 LIB libspdk_bdev.a 00:02:11.552 CC lib/ftl/ftl_core.o 00:02:11.552 CC lib/ftl/ftl_init.o 00:02:11.552 CC lib/ftl/ftl_layout.o 00:02:11.552 CC lib/ftl/ftl_debug.o 00:02:11.552 CC lib/ftl/ftl_io.o 00:02:11.552 CC lib/ftl/ftl_sb.o 00:02:11.552 CC lib/ftl/ftl_l2p_flat.o 00:02:11.552 CC lib/ftl/ftl_l2p.o 00:02:11.552 CC lib/ftl/ftl_band_ops.o 00:02:11.552 CC lib/ftl/ftl_nv_cache.o 00:02:11.552 CC lib/ftl/ftl_band.o 00:02:11.552 CC lib/ftl/ftl_writer.o 00:02:11.552 CC lib/ftl/ftl_rq.o 00:02:11.552 CC lib/ftl/ftl_reloc.o 00:02:11.552 CC lib/ftl/ftl_l2p_cache.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt.o 00:02:11.552 CC lib/ftl/ftl_p2l.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:11.552 CC lib/ublk/ublk_rpc.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:11.552 CC lib/ublk/ublk.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:11.552 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:11.552 CC lib/ftl/utils/ftl_conf.o 00:02:11.552 CC lib/nvmf/ctrlr.o 00:02:11.552 CC lib/ftl/utils/ftl_md.o 00:02:11.552 CC lib/ftl/utils/ftl_mempool.o 00:02:11.552 CC lib/ftl/utils/ftl_property.o 00:02:11.552 CC lib/nvmf/ctrlr_discovery.o 00:02:11.552 CC lib/ftl/utils/ftl_bitmap.o 00:02:11.552 CC lib/nvmf/ctrlr_bdev.o 00:02:11.552 CC lib/scsi/dev.o 00:02:11.552 CC lib/scsi/lun.o 
00:02:11.552 CC lib/nvmf/subsystem.o 00:02:11.552 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:11.552 CC lib/nvmf/nvmf.o 00:02:11.552 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:11.552 CC lib/scsi/port.o 00:02:11.552 CC lib/nvmf/nvmf_rpc.o 00:02:11.552 CC lib/nbd/nbd.o 00:02:11.552 CC lib/scsi/scsi.o 00:02:11.552 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:11.552 CC lib/nbd/nbd_rpc.o 00:02:11.552 CC lib/scsi/scsi_bdev.o 00:02:11.552 CC lib/scsi/scsi_pr.o 00:02:11.552 CC lib/nvmf/tcp.o 00:02:11.552 CC lib/nvmf/transport.o 00:02:11.552 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:11.552 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:11.552 CC lib/scsi/scsi_rpc.o 00:02:11.552 CC lib/nvmf/stubs.o 00:02:11.552 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:11.552 CC lib/nvmf/rdma.o 00:02:11.552 CC lib/scsi/task.o 00:02:11.552 CC lib/nvmf/mdns_server.o 00:02:11.552 CC lib/nvmf/vfio_user.o 00:02:11.552 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:11.552 CC lib/nvmf/auth.o 00:02:11.552 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:11.552 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:11.552 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:11.552 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:11.552 CC lib/ftl/base/ftl_base_bdev.o 00:02:11.552 CC lib/ftl/base/ftl_base_dev.o 00:02:11.552 CC lib/ftl/ftl_trace.o 00:02:11.895 LIB libspdk_nbd.a 00:02:12.158 LIB libspdk_scsi.a 00:02:12.158 LIB libspdk_ublk.a 00:02:12.158 LIB libspdk_ftl.a 00:02:12.416 CC lib/iscsi/iscsi.o 00:02:12.416 CC lib/iscsi/conn.o 00:02:12.416 CC lib/iscsi/md5.o 00:02:12.416 CC lib/iscsi/init_grp.o 00:02:12.416 CC lib/iscsi/portal_grp.o 00:02:12.416 CC lib/iscsi/param.o 00:02:12.416 CC lib/iscsi/iscsi_subsystem.o 00:02:12.416 CC lib/iscsi/tgt_node.o 00:02:12.416 CC lib/iscsi/iscsi_rpc.o 00:02:12.416 CC lib/iscsi/task.o 00:02:12.416 CC lib/vhost/vhost.o 00:02:12.416 CC lib/vhost/vhost_rpc.o 00:02:12.416 CC lib/vhost/vhost_scsi.o 00:02:12.416 CC lib/vhost/vhost_blk.o 00:02:12.416 CC lib/vhost/rte_vhost_user.o 00:02:12.984 LIB libspdk_nvmf.a 00:02:12.984 LIB libspdk_vhost.a 00:02:13.243 LIB libspdk_iscsi.a 00:02:13.502 CC module/env_dpdk/env_dpdk_rpc.o 00:02:13.502 CC module/vfu_device/vfu_virtio_rpc.o 00:02:13.502 CC module/vfu_device/vfu_virtio.o 00:02:13.502 CC module/vfu_device/vfu_virtio_blk.o 00:02:13.502 CC module/vfu_device/vfu_virtio_scsi.o 00:02:13.762 CC module/blob/bdev/blob_bdev.o 00:02:13.762 CC module/accel/dsa/accel_dsa.o 00:02:13.762 CC module/accel/dsa/accel_dsa_rpc.o 00:02:13.762 CC module/accel/error/accel_error.o 00:02:13.762 CC module/accel/iaa/accel_iaa.o 00:02:13.762 CC module/accel/error/accel_error_rpc.o 00:02:13.762 CC module/accel/iaa/accel_iaa_rpc.o 00:02:13.762 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:13.762 CC module/scheduler/gscheduler/gscheduler.o 00:02:13.762 CC module/sock/posix/posix.o 00:02:13.762 CC module/accel/ioat/accel_ioat.o 00:02:13.762 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:13.762 CC module/accel/ioat/accel_ioat_rpc.o 00:02:13.762 LIB libspdk_env_dpdk_rpc.a 00:02:13.762 CC module/keyring/file/keyring.o 00:02:13.762 CC module/keyring/file/keyring_rpc.o 00:02:13.762 CC module/keyring/linux/keyring.o 00:02:13.762 CC module/keyring/linux/keyring_rpc.o 00:02:13.762 LIB libspdk_scheduler_dpdk_governor.a 00:02:13.762 LIB libspdk_accel_error.a 00:02:13.762 LIB libspdk_scheduler_gscheduler.a 00:02:13.762 LIB libspdk_keyring_file.a 00:02:13.762 LIB libspdk_accel_iaa.a 00:02:13.762 LIB libspdk_blob_bdev.a 00:02:13.762 LIB libspdk_keyring_linux.a 00:02:13.762 LIB libspdk_scheduler_dynamic.a 00:02:13.762 LIB 
libspdk_accel_ioat.a 00:02:13.762 LIB libspdk_accel_dsa.a 00:02:14.021 LIB libspdk_vfu_device.a 00:02:14.280 LIB libspdk_sock_posix.a 00:02:14.280 CC module/bdev/error/vbdev_error.o 00:02:14.280 CC module/bdev/error/vbdev_error_rpc.o 00:02:14.280 CC module/bdev/gpt/vbdev_gpt.o 00:02:14.280 CC module/bdev/gpt/gpt.o 00:02:14.280 CC module/bdev/malloc/bdev_malloc.o 00:02:14.280 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:14.280 CC module/bdev/null/bdev_null.o 00:02:14.280 CC module/bdev/null/bdev_null_rpc.o 00:02:14.280 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:14.280 CC module/bdev/iscsi/bdev_iscsi.o 00:02:14.280 CC module/bdev/passthru/vbdev_passthru.o 00:02:14.280 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:14.280 CC module/blobfs/bdev/blobfs_bdev.o 00:02:14.280 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:14.280 CC module/bdev/raid/bdev_raid.o 00:02:14.280 CC module/bdev/raid/bdev_raid_rpc.o 00:02:14.280 CC module/bdev/lvol/vbdev_lvol.o 00:02:14.280 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:14.280 CC module/bdev/raid/bdev_raid_sb.o 00:02:14.280 CC module/bdev/raid/raid0.o 00:02:14.280 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:14.280 CC module/bdev/raid/concat.o 00:02:14.280 CC module/bdev/ftl/bdev_ftl.o 00:02:14.280 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:14.280 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:14.280 CC module/bdev/raid/raid1.o 00:02:14.280 CC module/bdev/delay/vbdev_delay.o 00:02:14.280 CC module/bdev/aio/bdev_aio.o 00:02:14.280 CC module/bdev/aio/bdev_aio_rpc.o 00:02:14.280 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:14.280 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:14.280 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:14.280 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:14.280 CC module/bdev/split/vbdev_split_rpc.o 00:02:14.280 CC module/bdev/nvme/bdev_nvme.o 00:02:14.280 CC module/bdev/split/vbdev_split.o 00:02:14.280 CC module/bdev/nvme/nvme_rpc.o 00:02:14.280 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:14.280 CC module/bdev/nvme/bdev_mdns_client.o 00:02:14.280 CC module/bdev/nvme/vbdev_opal.o 00:02:14.280 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:14.280 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:14.538 LIB libspdk_blobfs_bdev.a 00:02:14.538 LIB libspdk_bdev_gpt.a 00:02:14.538 LIB libspdk_bdev_error.a 00:02:14.538 LIB libspdk_bdev_null.a 00:02:14.538 LIB libspdk_bdev_split.a 00:02:14.538 LIB libspdk_bdev_passthru.a 00:02:14.538 LIB libspdk_bdev_ftl.a 00:02:14.538 LIB libspdk_bdev_zone_block.a 00:02:14.538 LIB libspdk_bdev_iscsi.a 00:02:14.538 LIB libspdk_bdev_aio.a 00:02:14.538 LIB libspdk_bdev_malloc.a 00:02:14.538 LIB libspdk_bdev_delay.a 00:02:14.538 LIB libspdk_bdev_lvol.a 00:02:14.795 LIB libspdk_bdev_virtio.a 00:02:14.795 LIB libspdk_bdev_raid.a 00:02:15.732 LIB libspdk_bdev_nvme.a 00:02:15.991 CC module/event/subsystems/iobuf/iobuf.o 00:02:15.991 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:15.991 CC module/event/subsystems/sock/sock.o 00:02:15.991 CC module/event/subsystems/scheduler/scheduler.o 00:02:15.991 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:16.249 CC module/event/subsystems/vmd/vmd.o 00:02:16.249 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:16.249 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:16.249 CC module/event/subsystems/keyring/keyring.o 00:02:16.249 LIB libspdk_event_iobuf.a 00:02:16.249 LIB libspdk_event_sock.a 00:02:16.249 LIB libspdk_event_scheduler.a 00:02:16.249 LIB libspdk_event_vfu_tgt.a 00:02:16.249 LIB libspdk_event_vhost_blk.a 00:02:16.249 LIB libspdk_event_keyring.a 
00:02:16.249 LIB libspdk_event_vmd.a 00:02:16.508 CC module/event/subsystems/accel/accel.o 00:02:16.508 LIB libspdk_event_accel.a 00:02:17.075 CC module/event/subsystems/bdev/bdev.o 00:02:17.075 LIB libspdk_event_bdev.a 00:02:17.333 CC module/event/subsystems/scsi/scsi.o 00:02:17.333 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:17.333 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:17.333 CC module/event/subsystems/nbd/nbd.o 00:02:17.333 CC module/event/subsystems/ublk/ublk.o 00:02:17.591 LIB libspdk_event_scsi.a 00:02:17.591 LIB libspdk_event_nbd.a 00:02:17.591 LIB libspdk_event_ublk.a 00:02:17.591 LIB libspdk_event_nvmf.a 00:02:17.850 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.850 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.850 LIB libspdk_event_vhost_scsi.a 00:02:17.850 LIB libspdk_event_iscsi.a 00:02:18.426 CXX app/trace/trace.o 00:02:18.426 CC app/spdk_top/spdk_top.o 00:02:18.426 CC app/spdk_nvme_identify/identify.o 00:02:18.426 CC app/spdk_nvme_perf/perf.o 00:02:18.426 CC app/trace_record/trace_record.o 00:02:18.426 CC test/rpc_client/rpc_client_test.o 00:02:18.426 TEST_HEADER include/spdk/accel.h 00:02:18.426 TEST_HEADER include/spdk/assert.h 00:02:18.426 TEST_HEADER include/spdk/barrier.h 00:02:18.426 TEST_HEADER include/spdk/base64.h 00:02:18.426 TEST_HEADER include/spdk/accel_module.h 00:02:18.426 CC app/spdk_lspci/spdk_lspci.o 00:02:18.426 TEST_HEADER include/spdk/bit_array.h 00:02:18.426 TEST_HEADER include/spdk/bdev.h 00:02:18.426 TEST_HEADER include/spdk/bdev_module.h 00:02:18.426 TEST_HEADER include/spdk/bdev_zone.h 00:02:18.426 TEST_HEADER include/spdk/blob_bdev.h 00:02:18.426 TEST_HEADER include/spdk/bit_pool.h 00:02:18.426 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:18.426 TEST_HEADER include/spdk/blobfs.h 00:02:18.426 TEST_HEADER include/spdk/blob.h 00:02:18.426 TEST_HEADER include/spdk/conf.h 00:02:18.426 TEST_HEADER include/spdk/config.h 00:02:18.426 TEST_HEADER include/spdk/cpuset.h 00:02:18.426 TEST_HEADER include/spdk/crc16.h 00:02:18.426 TEST_HEADER include/spdk/crc32.h 00:02:18.426 TEST_HEADER include/spdk/dma.h 00:02:18.426 TEST_HEADER include/spdk/crc64.h 00:02:18.426 TEST_HEADER include/spdk/dif.h 00:02:18.426 TEST_HEADER include/spdk/env.h 00:02:18.426 TEST_HEADER include/spdk/env_dpdk.h 00:02:18.426 TEST_HEADER include/spdk/endian.h 00:02:18.426 TEST_HEADER include/spdk/event.h 00:02:18.426 TEST_HEADER include/spdk/fd_group.h 00:02:18.426 TEST_HEADER include/spdk/file.h 00:02:18.426 TEST_HEADER include/spdk/fd.h 00:02:18.426 TEST_HEADER include/spdk/gpt_spec.h 00:02:18.426 CC app/spdk_nvme_discover/discovery_aer.o 00:02:18.426 TEST_HEADER include/spdk/ftl.h 00:02:18.426 TEST_HEADER include/spdk/hexlify.h 00:02:18.426 TEST_HEADER include/spdk/histogram_data.h 00:02:18.427 TEST_HEADER include/spdk/idxd_spec.h 00:02:18.427 TEST_HEADER include/spdk/idxd.h 00:02:18.427 TEST_HEADER include/spdk/init.h 00:02:18.427 TEST_HEADER include/spdk/ioat.h 00:02:18.427 TEST_HEADER include/spdk/ioat_spec.h 00:02:18.427 TEST_HEADER include/spdk/jsonrpc.h 00:02:18.427 TEST_HEADER include/spdk/iscsi_spec.h 00:02:18.427 CC app/iscsi_tgt/iscsi_tgt.o 00:02:18.427 TEST_HEADER include/spdk/json.h 00:02:18.427 TEST_HEADER include/spdk/keyring_module.h 00:02:18.427 TEST_HEADER include/spdk/keyring.h 00:02:18.427 TEST_HEADER include/spdk/log.h 00:02:18.427 CC app/nvmf_tgt/nvmf_main.o 00:02:18.427 TEST_HEADER include/spdk/likely.h 00:02:18.427 TEST_HEADER include/spdk/mmio.h 00:02:18.427 TEST_HEADER include/spdk/lvol.h 00:02:18.427 TEST_HEADER 
include/spdk/memory.h 00:02:18.427 TEST_HEADER include/spdk/notify.h 00:02:18.427 TEST_HEADER include/spdk/nvme.h 00:02:18.427 TEST_HEADER include/spdk/nbd.h 00:02:18.427 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:18.427 TEST_HEADER include/spdk/nvme_spec.h 00:02:18.427 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:18.427 TEST_HEADER include/spdk/nvme_intel.h 00:02:18.427 TEST_HEADER include/spdk/nvmf.h 00:02:18.427 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:18.427 TEST_HEADER include/spdk/nvme_zns.h 00:02:18.427 TEST_HEADER include/spdk/nvmf_transport.h 00:02:18.427 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:18.427 TEST_HEADER include/spdk/opal.h 00:02:18.427 TEST_HEADER include/spdk/nvmf_spec.h 00:02:18.427 TEST_HEADER include/spdk/opal_spec.h 00:02:18.427 TEST_HEADER include/spdk/reduce.h 00:02:18.427 TEST_HEADER include/spdk/pci_ids.h 00:02:18.427 TEST_HEADER include/spdk/pipe.h 00:02:18.427 TEST_HEADER include/spdk/rpc.h 00:02:18.427 TEST_HEADER include/spdk/queue.h 00:02:18.427 TEST_HEADER include/spdk/scheduler.h 00:02:18.427 TEST_HEADER include/spdk/scsi_spec.h 00:02:18.427 TEST_HEADER include/spdk/scsi.h 00:02:18.427 TEST_HEADER include/spdk/sock.h 00:02:18.427 TEST_HEADER include/spdk/stdinc.h 00:02:18.427 TEST_HEADER include/spdk/string.h 00:02:18.427 TEST_HEADER include/spdk/thread.h 00:02:18.427 TEST_HEADER include/spdk/trace.h 00:02:18.427 TEST_HEADER include/spdk/trace_parser.h 00:02:18.427 TEST_HEADER include/spdk/tree.h 00:02:18.427 CC app/spdk_dd/spdk_dd.o 00:02:18.427 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:18.427 TEST_HEADER include/spdk/ublk.h 00:02:18.427 TEST_HEADER include/spdk/util.h 00:02:18.427 TEST_HEADER include/spdk/uuid.h 00:02:18.427 TEST_HEADER include/spdk/version.h 00:02:18.427 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:18.427 TEST_HEADER include/spdk/vhost.h 00:02:18.427 TEST_HEADER include/spdk/vmd.h 00:02:18.427 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:18.427 CC app/spdk_tgt/spdk_tgt.o 00:02:18.427 TEST_HEADER include/spdk/xor.h 00:02:18.427 CXX test/cpp_headers/accel_module.o 00:02:18.427 TEST_HEADER include/spdk/zipf.h 00:02:18.427 CXX test/cpp_headers/accel.o 00:02:18.427 CXX test/cpp_headers/base64.o 00:02:18.427 CXX test/cpp_headers/assert.o 00:02:18.427 CXX test/cpp_headers/bdev.o 00:02:18.427 CXX test/cpp_headers/bdev_module.o 00:02:18.427 CXX test/cpp_headers/barrier.o 00:02:18.427 CXX test/cpp_headers/bdev_zone.o 00:02:18.427 CXX test/cpp_headers/bit_pool.o 00:02:18.427 CXX test/cpp_headers/bit_array.o 00:02:18.427 CXX test/cpp_headers/blobfs.o 00:02:18.427 CXX test/cpp_headers/blob_bdev.o 00:02:18.427 CXX test/cpp_headers/conf.o 00:02:18.427 CXX test/cpp_headers/blobfs_bdev.o 00:02:18.427 CXX test/cpp_headers/config.o 00:02:18.427 CXX test/cpp_headers/blob.o 00:02:18.427 CXX test/cpp_headers/cpuset.o 00:02:18.427 CXX test/cpp_headers/crc32.o 00:02:18.427 CXX test/cpp_headers/crc64.o 00:02:18.427 CXX test/cpp_headers/crc16.o 00:02:18.427 CXX test/cpp_headers/dma.o 00:02:18.427 CXX test/cpp_headers/dif.o 00:02:18.427 CXX test/cpp_headers/endian.o 00:02:18.427 CXX test/cpp_headers/env_dpdk.o 00:02:18.427 CXX test/cpp_headers/event.o 00:02:18.427 CXX test/cpp_headers/env.o 00:02:18.427 CXX test/cpp_headers/fd_group.o 00:02:18.427 CXX test/cpp_headers/fd.o 00:02:18.427 CXX test/cpp_headers/file.o 00:02:18.427 CXX test/cpp_headers/ftl.o 00:02:18.427 CXX test/cpp_headers/hexlify.o 00:02:18.427 CXX test/cpp_headers/gpt_spec.o 00:02:18.427 CXX test/cpp_headers/idxd.o 00:02:18.427 CXX test/cpp_headers/histogram_data.o 
00:02:18.427 CXX test/cpp_headers/idxd_spec.o 00:02:18.427 CXX test/cpp_headers/ioat.o 00:02:18.427 CXX test/cpp_headers/init.o 00:02:18.427 CXX test/cpp_headers/ioat_spec.o 00:02:18.427 CXX test/cpp_headers/iscsi_spec.o 00:02:18.427 CXX test/cpp_headers/keyring.o 00:02:18.427 CXX test/cpp_headers/json.o 00:02:18.427 CXX test/cpp_headers/jsonrpc.o 00:02:18.427 CXX test/cpp_headers/keyring_module.o 00:02:18.427 CXX test/cpp_headers/likely.o 00:02:18.427 CXX test/cpp_headers/log.o 00:02:18.427 CXX test/cpp_headers/lvol.o 00:02:18.427 CXX test/cpp_headers/memory.o 00:02:18.427 CXX test/cpp_headers/mmio.o 00:02:18.427 CXX test/cpp_headers/nbd.o 00:02:18.427 CXX test/cpp_headers/notify.o 00:02:18.427 CXX test/cpp_headers/nvme.o 00:02:18.427 CXX test/cpp_headers/nvme_intel.o 00:02:18.427 CXX test/cpp_headers/nvme_ocssd.o 00:02:18.427 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:18.427 CXX test/cpp_headers/nvme_spec.o 00:02:18.427 CXX test/cpp_headers/nvme_zns.o 00:02:18.427 CXX test/cpp_headers/nvmf_cmd.o 00:02:18.427 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:18.427 CXX test/cpp_headers/nvmf.o 00:02:18.427 CC test/env/pci/pci_ut.o 00:02:18.427 CXX test/cpp_headers/nvmf_spec.o 00:02:18.427 CXX test/cpp_headers/nvmf_transport.o 00:02:18.427 CXX test/cpp_headers/opal.o 00:02:18.427 CC test/thread/lock/spdk_lock.o 00:02:18.427 CXX test/cpp_headers/opal_spec.o 00:02:18.427 CXX test/cpp_headers/pci_ids.o 00:02:18.427 CXX test/cpp_headers/pipe.o 00:02:18.427 CXX test/cpp_headers/queue.o 00:02:18.427 CXX test/cpp_headers/reduce.o 00:02:18.427 CXX test/cpp_headers/rpc.o 00:02:18.427 CXX test/cpp_headers/scheduler.o 00:02:18.427 CXX test/cpp_headers/scsi.o 00:02:18.427 CXX test/cpp_headers/scsi_spec.o 00:02:18.427 CC app/fio/nvme/fio_plugin.o 00:02:18.427 CXX test/cpp_headers/sock.o 00:02:18.427 CC test/thread/poller_perf/poller_perf.o 00:02:18.427 CXX test/cpp_headers/stdinc.o 00:02:18.427 CXX test/cpp_headers/string.o 00:02:18.427 CC test/app/jsoncat/jsoncat.o 00:02:18.427 CXX test/cpp_headers/thread.o 00:02:18.427 CXX test/cpp_headers/trace.o 00:02:18.427 CXX test/cpp_headers/trace_parser.o 00:02:18.427 CXX test/cpp_headers/tree.o 00:02:18.427 CXX test/cpp_headers/ublk.o 00:02:18.427 CC test/env/memory/memory_ut.o 00:02:18.427 CXX test/cpp_headers/util.o 00:02:18.427 CXX test/cpp_headers/uuid.o 00:02:18.427 LINK spdk_lspci 00:02:18.427 CC examples/util/zipf/zipf.o 00:02:18.427 CC examples/ioat/verify/verify.o 00:02:18.427 CC test/app/histogram_perf/histogram_perf.o 00:02:18.427 CC test/env/vtophys/vtophys.o 00:02:18.427 CC test/app/stub/stub.o 00:02:18.427 CXX test/cpp_headers/version.o 00:02:18.427 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:18.427 CC examples/ioat/perf/perf.o 00:02:18.427 CC test/dma/test_dma/test_dma.o 00:02:18.427 CXX test/cpp_headers/vfio_user_pci.o 00:02:18.427 CXX test/cpp_headers/vfio_user_spec.o 00:02:18.427 CC test/app/bdev_svc/bdev_svc.o 00:02:18.427 CXX test/cpp_headers/vhost.o 00:02:18.427 LINK rpc_client_test 00:02:18.427 CC app/fio/bdev/fio_plugin.o 00:02:18.427 CXX test/cpp_headers/vmd.o 00:02:18.427 LINK spdk_trace_record 00:02:18.687 CC test/env/mem_callbacks/mem_callbacks.o 00:02:18.687 LINK spdk_nvme_discover 00:02:18.687 LINK nvmf_tgt 00:02:18.687 CXX test/cpp_headers/xor.o 00:02:18.687 CXX test/cpp_headers/zipf.o 00:02:18.687 LINK interrupt_tgt 00:02:18.687 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:18.687 LINK jsoncat 00:02:18.687 LINK iscsi_tgt 00:02:18.687 LINK poller_perf 00:02:18.687 LINK histogram_perf 00:02:18.687 LINK vtophys 00:02:18.687 
LINK spdk_tgt 00:02:18.687 LINK zipf 00:02:18.687 LINK env_dpdk_post_init 00:02:18.687 LINK verify 00:02:18.687 LINK stub 00:02:18.687 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:18.687 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:18.687 struct spdk_nvme_fdp_ruhs ruhs; 00:02:18.687 ^ 00:02:18.687 LINK ioat_perf 00:02:18.687 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.687 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:18.687 LINK bdev_svc 00:02:18.687 LINK spdk_trace 00:02:18.687 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:18.687 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:18.945 LINK spdk_dd 00:02:18.945 LINK test_dma 00:02:18.945 LINK pci_ut 00:02:18.945 LINK spdk_nvme_identify 00:02:18.945 1 warning generated. 00:02:18.945 LINK nvme_fuzz 00:02:18.945 LINK spdk_nvme 00:02:18.945 LINK spdk_bdev 00:02:18.945 LINK mem_callbacks 00:02:18.945 LINK llvm_vfio_fuzz 00:02:18.945 LINK vhost_fuzz 00:02:18.945 LINK spdk_nvme_perf 00:02:18.945 LINK spdk_top 00:02:19.204 CC app/vhost/vhost.o 00:02:19.204 LINK llvm_nvme_fuzz 00:02:19.204 CC examples/sock/hello_world/hello_sock.o 00:02:19.204 CC examples/idxd/perf/perf.o 00:02:19.204 CC examples/vmd/lsvmd/lsvmd.o 00:02:19.204 CC examples/vmd/led/led.o 00:02:19.204 CC examples/thread/thread/thread_ex.o 00:02:19.204 LINK memory_ut 00:02:19.463 LINK vhost 00:02:19.463 LINK lsvmd 00:02:19.463 LINK led 00:02:19.463 LINK hello_sock 00:02:19.463 LINK idxd_perf 00:02:19.463 LINK spdk_lock 00:02:19.463 LINK thread 00:02:19.722 LINK iscsi_fuzz 00:02:19.981 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.981 CC examples/nvme/abort/abort.o 00:02:19.981 CC examples/nvme/hotplug/hotplug.o 00:02:19.981 CC examples/nvme/reconnect/reconnect.o 00:02:19.981 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.981 CC examples/nvme/hello_world/hello_world.o 00:02:19.981 CC examples/nvme/arbitration/arbitration.o 00:02:19.981 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:20.239 CC test/event/event_perf/event_perf.o 00:02:20.239 CC test/event/reactor/reactor.o 00:02:20.239 CC test/event/reactor_perf/reactor_perf.o 00:02:20.239 CC test/event/app_repeat/app_repeat.o 00:02:20.239 CC test/event/scheduler/scheduler.o 00:02:20.239 LINK pmr_persistence 00:02:20.239 LINK hotplug 00:02:20.239 LINK cmb_copy 00:02:20.239 LINK hello_world 00:02:20.239 LINK reactor 00:02:20.239 LINK event_perf 00:02:20.240 LINK reactor_perf 00:02:20.240 LINK reconnect 00:02:20.240 LINK abort 00:02:20.240 LINK app_repeat 00:02:20.240 LINK arbitration 00:02:20.498 LINK nvme_manage 00:02:20.498 LINK scheduler 00:02:20.498 CC test/nvme/reserve/reserve.o 00:02:20.498 CC test/nvme/overhead/overhead.o 00:02:20.498 CC test/nvme/sgl/sgl.o 00:02:20.498 CC test/nvme/e2edp/nvme_dp.o 00:02:20.498 CC test/nvme/aer/aer.o 00:02:20.498 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:20.498 CC test/nvme/compliance/nvme_compliance.o 00:02:20.498 CC test/nvme/reset/reset.o 00:02:20.498 CC test/nvme/boot_partition/boot_partition.o 00:02:20.498 CC test/nvme/cuse/cuse.o 00:02:20.498 CC test/nvme/fdp/fdp.o 00:02:20.498 CC test/nvme/connect_stress/connect_stress.o 00:02:20.498 CC test/nvme/err_injection/err_injection.o 00:02:20.498 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.498 CC test/nvme/simple_copy/simple_copy.o 00:02:20.498 CC test/nvme/startup/startup.o 00:02:20.498 CC test/blobfs/mkfs/mkfs.o 00:02:20.498 CC 
test/accel/dif/dif.o 00:02:20.755 CC test/lvol/esnap/esnap.o 00:02:20.755 LINK reserve 00:02:20.755 LINK boot_partition 00:02:20.755 LINK doorbell_aers 00:02:20.755 LINK startup 00:02:20.755 LINK err_injection 00:02:20.755 LINK fused_ordering 00:02:20.755 LINK connect_stress 00:02:20.755 LINK nvme_dp 00:02:20.755 LINK simple_copy 00:02:20.755 LINK aer 00:02:20.755 LINK overhead 00:02:20.755 LINK sgl 00:02:20.755 LINK mkfs 00:02:20.755 LINK reset 00:02:20.755 LINK fdp 00:02:20.755 LINK nvme_compliance 00:02:21.012 LINK dif 00:02:21.271 CC examples/accel/perf/accel_perf.o 00:02:21.271 CC examples/blob/hello_world/hello_blob.o 00:02:21.271 CC examples/blob/cli/blobcli.o 00:02:21.529 LINK cuse 00:02:21.529 LINK hello_blob 00:02:21.529 LINK accel_perf 00:02:21.529 LINK blobcli 00:02:22.463 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.463 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.463 LINK hello_bdev 00:02:22.463 CC test/bdev/bdevio/bdevio.o 00:02:22.722 LINK bdevperf 00:02:22.722 LINK bdevio 00:02:24.101 LINK esnap 00:02:24.360 CC examples/nvmf/nvmf/nvmf.o 00:02:24.360 LINK nvmf 00:02:25.739 00:02:25.739 real 0m45.499s 00:02:25.739 user 5m33.354s 00:02:25.739 sys 2m30.494s 00:02:25.739 20:16:18 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:25.739 20:16:18 make -- common/autotest_common.sh@10 -- $ set +x 00:02:25.739 ************************************ 00:02:25.739 END TEST make 00:02:25.739 ************************************ 00:02:25.739 20:16:18 -- common/autotest_common.sh@1142 -- $ return 0 00:02:25.739 20:16:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:25.739 20:16:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:25.739 20:16:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:25.739 20:16:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.739 20:16:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:25.739 20:16:18 -- pm/common@44 -- $ pid=187905 00:02:25.739 20:16:18 -- pm/common@50 -- $ kill -TERM 187905 00:02:25.739 20:16:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.739 20:16:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:25.739 20:16:18 -- pm/common@44 -- $ pid=187907 00:02:25.739 20:16:18 -- pm/common@50 -- $ kill -TERM 187907 00:02:25.739 20:16:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.739 20:16:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:25.739 20:16:18 -- pm/common@44 -- $ pid=187909 00:02:25.739 20:16:18 -- pm/common@50 -- $ kill -TERM 187909 00:02:25.739 20:16:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.739 20:16:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:25.739 20:16:18 -- pm/common@44 -- $ pid=187935 00:02:25.739 20:16:18 -- pm/common@50 -- $ sudo -E kill -TERM 187935 00:02:25.998 20:16:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:25.998 20:16:18 -- nvmf/common.sh@7 -- # uname -s 00:02:25.998 20:16:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:25.998 20:16:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:25.998 20:16:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:25.998 20:16:18 -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:25.998 20:16:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:25.998 20:16:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:25.998 20:16:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:25.998 20:16:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:25.998 20:16:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:25.998 20:16:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:25.998 20:16:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:25.998 20:16:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:25.998 20:16:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:25.998 20:16:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:25.998 20:16:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:25.998 20:16:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:25.999 20:16:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:25.999 20:16:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:25.999 20:16:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.999 20:16:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.999 20:16:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.999 20:16:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.999 20:16:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.999 20:16:18 -- paths/export.sh@5 -- # export PATH 00:02:25.999 20:16:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.999 20:16:18 -- nvmf/common.sh@47 -- # : 0 00:02:25.999 20:16:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:25.999 20:16:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:25.999 20:16:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:25.999 20:16:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:25.999 20:16:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:25.999 20:16:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:25.999 20:16:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:25.999 20:16:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:25.999 20:16:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:25.999 20:16:18 -- spdk/autotest.sh@32 -- # uname -s 00:02:25.999 20:16:18 -- spdk/autotest.sh@32 
-- # '[' Linux = Linux ']' 00:02:25.999 20:16:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:25.999 20:16:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:25.999 20:16:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:25.999 20:16:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:25.999 20:16:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:25.999 20:16:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:25.999 20:16:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:25.999 20:16:18 -- spdk/autotest.sh@48 -- # udevadm_pid=250736 00:02:25.999 20:16:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:25.999 20:16:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:25.999 20:16:18 -- pm/common@17 -- # local monitor 00:02:25.999 20:16:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.999 20:16:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.999 20:16:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.999 20:16:18 -- pm/common@21 -- # date +%s 00:02:25.999 20:16:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.999 20:16:18 -- pm/common@21 -- # date +%s 00:02:25.999 20:16:18 -- pm/common@25 -- # sleep 1 00:02:25.999 20:16:18 -- pm/common@21 -- # date +%s 00:02:25.999 20:16:18 -- pm/common@21 -- # date +%s 00:02:25.999 20:16:18 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067378 00:02:25.999 20:16:18 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067378 00:02:25.999 20:16:18 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067378 00:02:25.999 20:16:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067378 00:02:25.999 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721067378_collect-vmstat.pm.log 00:02:25.999 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721067378_collect-cpu-temp.pm.log 00:02:25.999 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721067378_collect-cpu-load.pm.log 00:02:25.999 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721067378_collect-bmc-pm.bmc.pm.log 00:02:26.935 20:16:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:26.935 20:16:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:26.935 20:16:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:26.935 20:16:19 -- common/autotest_common.sh@10 -- # set +x 00:02:26.935 20:16:19 -- 
spdk/autotest.sh@59 -- # create_test_list 00:02:26.935 20:16:19 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:26.935 20:16:19 -- common/autotest_common.sh@10 -- # set +x 00:02:26.935 20:16:19 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:27.195 20:16:19 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:27.195 20:16:19 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:27.195 20:16:19 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:27.195 20:16:19 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:27.195 20:16:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:27.195 20:16:19 -- common/autotest_common.sh@1455 -- # uname 00:02:27.195 20:16:19 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:27.195 20:16:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:27.195 20:16:19 -- common/autotest_common.sh@1475 -- # uname 00:02:27.195 20:16:19 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:27.195 20:16:19 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:27.195 20:16:19 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:27.195 20:16:19 -- spdk/autotest.sh@72 -- # hash lcov 00:02:27.195 20:16:19 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:27.195 20:16:19 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:27.195 20:16:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:27.195 20:16:19 -- common/autotest_common.sh@10 -- # set +x 00:02:27.195 20:16:19 -- spdk/autotest.sh@91 -- # rm -f 00:02:27.195 20:16:19 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.487 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:30.487 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:30.746 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:30.746 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:30.746 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:30.746 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:30.746 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:30.746 20:16:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:30.746 20:16:23 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:30.746 20:16:23 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:30.746 20:16:23 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:30.746 20:16:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:30.746 20:16:23 -- common/autotest_common.sh@1673 
-- # is_block_zoned nvme0n1 00:02:30.746 20:16:23 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:30.746 20:16:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:30.746 20:16:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:30.746 20:16:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:30.746 20:16:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:30.746 20:16:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:30.746 20:16:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:30.746 20:16:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:30.746 20:16:23 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:30.746 No valid GPT data, bailing 00:02:30.746 20:16:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:30.746 20:16:23 -- scripts/common.sh@391 -- # pt= 00:02:30.746 20:16:23 -- scripts/common.sh@392 -- # return 1 00:02:30.746 20:16:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:30.746 1+0 records in 00:02:30.746 1+0 records out 00:02:30.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621868 s, 169 MB/s 00:02:30.746 20:16:23 -- spdk/autotest.sh@118 -- # sync 00:02:30.746 20:16:23 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:30.746 20:16:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:30.746 20:16:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:38.906 20:16:30 -- spdk/autotest.sh@124 -- # uname -s 00:02:38.906 20:16:30 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:38.906 20:16:30 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:38.906 20:16:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.906 20:16:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.906 20:16:30 -- common/autotest_common.sh@10 -- # set +x 00:02:38.906 ************************************ 00:02:38.906 START TEST setup.sh 00:02:38.906 ************************************ 00:02:38.906 20:16:30 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:38.906 * Looking for test storage... 00:02:38.906 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:38.906 20:16:30 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:38.906 20:16:30 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:38.906 20:16:30 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:38.906 20:16:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.906 20:16:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.906 20:16:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:38.906 ************************************ 00:02:38.906 START TEST acl 00:02:38.906 ************************************ 00:02:38.906 20:16:30 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:38.906 * Looking for test storage... 
00:02:38.906 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:38.906 20:16:30 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:38.906 20:16:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:38.906 20:16:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:38.906 20:16:30 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:38.906 20:16:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:38.906 20:16:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:38.906 20:16:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:38.906 20:16:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.906 20:16:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:38.906 20:16:30 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:38.906 20:16:30 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:38.906 20:16:30 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:38.906 20:16:30 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:38.906 20:16:30 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:38.906 20:16:30 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.906 20:16:30 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.193 20:16:34 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:42.193 20:16:34 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:42.193 20:16:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.193 20:16:34 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:42.193 20:16:34 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.193 20:16:34 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:45.559 Hugepages 00:02:45.559 node hugesize free / total 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 00:02:45.559 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:45.559 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:45.560 20:16:37 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:45.560 20:16:37 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:45.560 20:16:37 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:45.560 20:16:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:45.560 ************************************ 00:02:45.560 START TEST denied 00:02:45.560 ************************************ 00:02:45.560 20:16:37 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:45.560 20:16:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:45.560 20:16:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:45.560 20:16:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.560 20:16:37 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:45.560 20:16:37 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:48.853 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:48.853 20:16:40 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:48.853 20:16:40 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:48.853 20:16:40 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:48.853 20:16:40 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:48.853 20:16:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:48.853 
20:16:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:48.853 20:16:40 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:48.853 20:16:40 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:48.853 20:16:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:48.853 20:16:40 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.050 00:02:53.050 real 0m7.695s 00:02:53.050 user 0m2.416s 00:02:53.050 sys 0m4.634s 00:02:53.050 20:16:45 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:53.050 20:16:45 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:53.050 ************************************ 00:02:53.050 END TEST denied 00:02:53.050 ************************************ 00:02:53.050 20:16:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:53.050 20:16:45 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:53.050 20:16:45 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.050 20:16:45 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.050 20:16:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:53.050 ************************************ 00:02:53.050 START TEST allowed 00:02:53.050 ************************************ 00:02:53.050 20:16:45 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:53.050 20:16:45 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:53.050 20:16:45 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:53.050 20:16:45 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.050 20:16:45 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:53.050 20:16:45 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:58.340 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:58.340 20:16:49 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:58.340 20:16:49 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:58.340 20:16:49 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:58.340 20:16:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.341 20:16:49 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.630 00:03:01.630 real 0m8.368s 00:03:01.630 user 0m2.349s 00:03:01.630 sys 0m4.506s 00:03:01.630 20:16:53 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:01.630 20:16:53 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:01.630 ************************************ 00:03:01.630 END TEST allowed 00:03:01.630 ************************************ 00:03:01.630 20:16:53 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:01.630 00:03:01.630 real 0m23.401s 00:03:01.630 user 0m7.404s 00:03:01.630 sys 0m14.121s 00:03:01.630 20:16:53 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:01.630 20:16:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:01.630 ************************************ 00:03:01.630 END TEST acl 00:03:01.630 ************************************ 00:03:01.630 20:16:53 setup.sh -- common/autotest_common.sh@1142 -- # return 0 
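[editor's note] A minimal illustrative sketch (not the SPDK source; the helper name verify_driver and its echo messages are hypothetical) of the sysfs driver check that the acl denied/allowed tests traced above rely on: resolve the driver symlink for a PCI BDF and compare its basename with the driver expected once PCI_BLOCKED or PCI_ALLOWED has been applied by setup.sh.

#!/usr/bin/env bash
# Sketch only: mirrors the `readlink -f /sys/bus/pci/devices/<bdf>/driver` check
# seen in the trace, where 0000:d8:00.0 must stay on nvme while blocked and
# end up on vfio-pci after being allowed and rebound.
verify_driver() {
    local bdf=$1 expected=$2 driver
    [[ -e /sys/bus/pci/devices/$bdf ]] || { echo "no such device: $bdf" >&2; return 1; }
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")   # e.g. /sys/bus/pci/drivers/nvme
    [[ ${driver##*/} == "$expected" ]]
}
# Usage: the denied path expects nvme to remain bound; the allowed path expects vfio-pci.
verify_driver 0000:d8:00.0 nvme     && echo "denied path ok: still bound to nvme"
verify_driver 0000:d8:00.0 vfio-pci && echo "allowed path ok: rebound to vfio-pci"

The same pattern applies to any BDF in the device table above; only the expected driver name changes.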
00:03:01.630 20:16:53 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:01.631 20:16:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.631 20:16:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.631 20:16:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:01.631 ************************************ 00:03:01.631 START TEST hugepages 00:03:01.631 ************************************ 00:03:01.631 20:16:53 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:01.631 * Looking for test storage... 00:03:01.631 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41598656 kB' 'MemAvailable: 43906084 kB' 'Buffers: 11496 kB' 'Cached: 10270196 kB' 'SwapCached: 16 kB' 'Active: 8585540 kB' 'Inactive: 2283636 kB' 'Active(anon): 8110412 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590720 kB' 'Mapped: 178808 kB' 'Shmem: 7601752 kB' 'KReclaimable: 249236 kB' 'Slab: 796636 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 547400 kB' 'KernelStack: 22064 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439068 kB' 'Committed_AS: 9563536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213364 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 
2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.631 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 
20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 
20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.632 20:16:53 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:01.632 20:16:53 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:01.632 20:16:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.632 20:16:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.632 20:16:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:01.891 ************************************ 00:03:01.891 START TEST default_setup 00:03:01.891 ************************************ 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:01.891 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.892 20:16:54 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:04.428 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:00:04.5 
(8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:04.428 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:06.342 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43784200 kB' 'MemAvailable: 46091628 kB' 'Buffers: 11496 kB' 'Cached: 10270324 kB' 'SwapCached: 16 kB' 'Active: 8603236 kB' 'Inactive: 2283636 kB' 'Active(anon): 8128108 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608468 kB' 'Mapped: 179228 kB' 'Shmem: 7601880 kB' 'KReclaimable: 249236 kB' 'Slab: 794080 kB' 
'SReclaimable: 249236 kB' 'SUnreclaim: 544844 kB' 'KernelStack: 22176 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9582028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213556 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.342 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.343 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43791252 kB' 'MemAvailable: 46098680 kB' 'Buffers: 11496 kB' 'Cached: 10270328 kB' 'SwapCached: 16 kB' 'Active: 8603396 kB' 'Inactive: 2283636 kB' 'Active(anon): 8128268 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608632 kB' 'Mapped: 179184 kB' 'Shmem: 7601884 kB' 'KReclaimable: 249236 kB' 'Slab: 794080 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 544844 kB' 'KernelStack: 22176 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9583536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.344 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
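The long quoted snapshot lines ('MemTotal: 60295232 kB' 'MemFree: ...') come from the same helper first slurping the whole meminfo file: with no NUMA node given it reads the system-wide /proc/meminfo, while a per-node query would read /sys/devices/system/node/node<N>/meminfo and strip the "Node <N> " prefix from every line. A rough sketch of that setup step, again inferred from the trace (details of the real setup/common.sh may differ):

#!/usr/bin/env bash
# Rough sketch of the snapshot step seen in the trace (mem_f, mapfile, prefix strip).
shopt -s extglob

meminfo_snapshot() {
    local node=$1 mem_f mem
    mem_f=/proc/meminfo
    # With an empty $node this test reads "node/node/meminfo" and fails, which is
    # why this run keeps using the system-wide file.
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node <N> " prefix; system-wide lines do not
    printf '%s\n' "${mem[@]}"
}

meminfo_snapshot ""    # system-wide snapshot, as in this run
meminfo_snapshot 0     # per-node variant, if node0 exists on the machine

The printf '%s\n' block in the log is presumably this snapshot being replayed into the per-field matching loop shown earlier.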
00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.345 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
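At this point AnonHugePages has already come back as 0 (anon=0), HugePages_Surp has just done the same (surp=0), and HugePages_Rsvd is being read; a few entries further on the test prints nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 and checks that the totals add up. A hedged sketch of that bookkeeping, using the lookup sketched earlier (not a verbatim copy of setup/hugepages.sh):

#!/usr/bin/env bash
# Hedged sketch of the hugepages accounting visible in the trace; the 1024 target
# mirrors the HugePages_Total: 1024 value in the snapshots, not the real script.
get_meminfo() {   # same single-field lookup as sketched earlier
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
}

expected=1024                                  # pages requested by the default_setup test
anon=$(get_meminfo AnonHugePages)              # 0 in this run
surp=$(get_meminfo HugePages_Surp)             # 0
resv=$(get_meminfo HugePages_Rsvd)             # 0
nr_hugepages=$(get_meminfo HugePages_Total)    # 1024

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# The two checks that appear at hugepages.sh@107 and @109 in the trace:
(( expected == nr_hugepages + surp + resv )) || { echo "hugepages accounting mismatch" >&2; exit 1; }
(( expected == nr_hugepages ))               || { echo "unexpected HugePages_Total" >&2; exit 1; }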
00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.346 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43794500 kB' 'MemAvailable: 46101928 kB' 'Buffers: 11496 kB' 'Cached: 10270348 kB' 'SwapCached: 16 kB' 'Active: 8602428 kB' 'Inactive: 2283636 kB' 'Active(anon): 8127300 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607568 kB' 'Mapped: 179108 kB' 'Shmem: 7601904 kB' 'KReclaimable: 249236 kB' 'Slab: 794036 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 544800 kB' 'KernelStack: 22112 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9582064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 
20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.347 
20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.347 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.348 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:06.349 nr_hugepages=1024 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:06.349 resv_hugepages=0 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:06.349 surplus_hugepages=0 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:06.349 anon_hugepages=0 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43792836 kB' 'MemAvailable: 46100264 kB' 'Buffers: 11496 kB' 'Cached: 10270364 kB' 'SwapCached: 16 kB' 'Active: 8602948 kB' 'Inactive: 2283636 kB' 'Active(anon): 8127820 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608052 kB' 'Mapped: 179116 kB' 'Shmem: 7601920 kB' 'KReclaimable: 249236 kB' 'Slab: 794036 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 544800 kB' 'KernelStack: 22224 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9583580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213636 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.349 20:16:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.349 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.350 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25811752 kB' 'MemUsed: 6780332 kB' 'SwapCached: 16 kB' 'Active: 3015260 kB' 'Inactive: 180800 kB' 'Active(anon): 2798640 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2969904 kB' 'Mapped: 118100 kB' 'AnonPages: 229408 kB' 'Shmem: 2572484 kB' 'KernelStack: 12648 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134904 kB' 'Slab: 390084 kB' 'SReclaimable: 134904 kB' 'SUnreclaim: 255180 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.351 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.352 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.352 20:16:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:06.353 node0=1024 expecting 1024 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:06.353 00:03:06.353 real 0m4.484s 00:03:06.353 user 0m0.935s 00:03:06.353 sys 0m1.975s 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.353 20:16:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:06.353 ************************************ 00:03:06.353 END TEST default_setup 00:03:06.353 ************************************ 00:03:06.353 20:16:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:06.353 20:16:58 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:06.353 20:16:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.353 20:16:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.353 20:16:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.353 ************************************ 00:03:06.353 START TEST per_node_1G_alloc 00:03:06.353 ************************************ 00:03:06.353 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:06.353 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:06.353 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:06.353 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:06.353 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:06.353 20:16:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:06.353 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:06.353 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.353 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.354 20:16:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:09.648 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 
00:03:09.648 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:09.648 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43846280 kB' 'MemAvailable: 46153708 kB' 'Buffers: 11496 kB' 'Cached: 10270472 kB' 'SwapCached: 16 kB' 'Active: 8604840 kB' 'Inactive: 2283636 kB' 'Active(anon): 8129712 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609256 kB' 'Mapped: 179196 kB' 'Shmem: 7602028 kB' 'KReclaimable: 249236 kB' 'Slab: 794544 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545308 kB' 'KernelStack: 22304 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9584184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213716 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.648 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.649 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43848976 kB' 'MemAvailable: 46156404 kB' 'Buffers: 11496 kB' 'Cached: 10270476 kB' 'SwapCached: 16 kB' 'Active: 8604312 kB' 'Inactive: 2283636 kB' 'Active(anon): 8129184 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609256 kB' 'Mapped: 179116 kB' 'Shmem: 7602032 kB' 'KReclaimable: 249236 kB' 'Slab: 794540 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545304 kB' 'KernelStack: 22112 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9582708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213700 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 
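[editor's note] The trace above shows the `get_meminfo` helper in `setup/common.sh` walking the captured `/proc/meminfo` snapshot one field at a time with `IFS=': '` / `read -r var val _`, hitting `continue` for every field that does not match the requested key, then echoing the value (0 here) and returning once it does. Below is a minimal, self-contained sketch of that lookup pattern; the function name `get_meminfo_field` and the fallback when a key is absent are illustrative assumptions, not SPDK's actual script.

```bash
#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo lookup pattern seen in the trace above.
# get_meminfo_field is an illustrative name, not SPDK's setup/common.sh.
get_meminfo_field() {
    local get=$1          # field to look up, e.g. HugePages_Surp
    local var val _
    # Walk the snapshot line by line, exactly like the IFS=': ' / read loop in the log.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching fields are skipped ("continue" lines in the trace)
        echo "${val:-0}"                   # print the value (0 if empty) and stop
        return 0
    done < /proc/meminfo
    echo 0                                 # assumed fallback when the key is absent
}

# Example: mirror the lookups performed in this part of the log.
for key in AnonHugePages HugePages_Surp HugePages_Rsvd HugePages_Total; do
    printf '%s=%s\n' "$key" "$(get_meminfo_field "$key")"
done
```

Each block of `[[ Field == \H\u\g\e... ]]` / `continue` lines in the log is one iteration of a loop like this skipping a non-matching field.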
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 
20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.650 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.651 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.651 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.652 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43849252 kB' 'MemAvailable: 46156680 kB' 'Buffers: 11496 kB' 'Cached: 10270496 kB' 'SwapCached: 16 kB' 'Active: 8604144 kB' 'Inactive: 2283636 kB' 'Active(anon): 8129016 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609116 kB' 'Mapped: 179116 kB' 'Shmem: 7602052 kB' 'KReclaimable: 249236 kB' 'Slab: 794572 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545336 kB' 'KernelStack: 22208 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9584224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213652 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- 
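[editor's note] The `local node=`, `mem_f=/proc/meminfo`, `[[ -e /sys/devices/system/node/node/meminfo ]]`, `mapfile -t mem` and `mem=("${mem[@]#Node +([0-9]) }")` lines in the trace select where the snapshot comes from: with an empty node the sysfs path does not exist, so the machine-wide `/proc/meminfo` is read, and the prefix strip is a no-op. The sketch below shows, under those assumptions, how a node argument would switch the source and normalize the per-node `Node <N> Field: value` lines; it is illustrative only.

```bash
#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern in the prefix strip below needs extglob

# Sketch of the node handling implied by the trace: read the per-node meminfo when a
# NUMA node is named (and exists), then drop the leading "Node <N> " so every line is
# back to plain "Field: value" form. Names and argument handling here are illustrative.
node=${1:-}                      # e.g. 0, or empty for the machine-wide snapshot
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi

mapfile -t mem < "$mem_f"                 # capture the snapshot, as mapfile -t mem does in the log
mem=("${mem[@]#Node +([0-9]) }")          # strip "Node 0 " prefixes (no-op for /proc/meminfo)

printf '%s\n' "${mem[@]}" | head -n 5     # show the normalized lines
```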
setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.652 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 
20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.653 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 
20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:09.654 
nr_hugepages=1024 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:09.654 resv_hugepages=0 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:09.654 surplus_hugepages=0 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:09.654 anon_hugepages=0 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43855340 kB' 'MemAvailable: 46162768 kB' 'Buffers: 11496 kB' 'Cached: 10270516 kB' 'SwapCached: 16 kB' 'Active: 8601888 kB' 'Inactive: 2283636 kB' 'Active(anon): 8126760 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606768 kB' 'Mapped: 177820 kB' 'Shmem: 7602072 kB' 'KReclaimable: 249236 kB' 'Slab: 794548 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545312 kB' 'KernelStack: 22160 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9573228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- 
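[editor's note] At `hugepages.sh@102`-`@109` the log echoes `nr_hugepages=1024`, `resv_hugepages=0`, `surplus_hugepages=0`, `anon_hugepages=0` and then asserts `(( 1024 == nr_hugepages + surp + resv ))` and `(( 1024 == nr_hugepages ))`: the pool reported by `HugePages_Total` must account for exactly the requested pages, with no reserved or surplus pages inflating the count. A short sketch of that accounting check follows, assuming a requested count of 1024 purely for illustration.

```bash
#!/usr/bin/env bash
set -e
# Sketch of the accounting check echoed in the log (nr_hugepages=1024, resv=0, surp=0,
# anon=0). The requested page count is an illustrative parameter, not a fixed value.
requested=1024

# Pull the same counters the trace looks up, straight from /proc/meminfo.
read -r nr_hugepages < <(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
read -r resv         < <(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
read -r surp         < <(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
read -r anon         < <(awk '/^AnonHugePages:/   {print $2}' /proc/meminfo)

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# Mirror the two assertions at hugepages.sh@107 and @109: the pool must account for
# exactly the requested pages, with no reserved or surplus pages padding the total.
(( requested == nr_hugepages + surp + resv ))
(( requested == nr_hugepages ))
echo "hugepage pool matches the requested $requested pages"
```

With `set -e`, a failed arithmetic comparison aborts the script, which mirrors how the test treats a mismatched hugepage count as a failure.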
setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:09.654 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.655 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26905492 kB' 'MemUsed: 5686592 kB' 'SwapCached: 16 kB' 'Active: 3014808 kB' 'Inactive: 180800 kB' 'Active(anon): 2798188 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2970048 kB' 'Mapped: 117820 kB' 'AnonPages: 228768 kB' 'Shmem: 2572628 kB' 'KernelStack: 12680 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134904 kB' 'Slab: 390292 kB' 'SReclaimable: 134904 kB' 'SUnreclaim: 255388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.656 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.657 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 16950580 kB' 'MemUsed: 10752568 kB' 'SwapCached: 0 kB' 'Active: 5587052 kB' 'Inactive: 2102836 kB' 'Active(anon): 5328544 kB' 'Inactive(anon): 78808 kB' 'Active(file): 258508 kB' 'Inactive(file): 2024028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7312004 kB' 'Mapped: 60000 kB' 'AnonPages: 377928 kB' 'Shmem: 5029468 kB' 'KernelStack: 9464 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114332 kB' 'Slab: 404256 kB' 'SReclaimable: 114332 kB' 
'SUnreclaim: 289924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 
20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.658 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.659 
20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:09.659 node0=512 expecting 512 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:09.659 node1=512 expecting 512 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:09.659 00:03:09.659 real 0m3.182s 00:03:09.659 user 0m1.115s 00:03:09.659 sys 0m1.980s 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.659 20:17:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:09.659 ************************************ 00:03:09.659 END TEST per_node_1G_alloc 00:03:09.659 ************************************ 00:03:09.659 20:17:01 
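The per_node_1G_alloc test above finishes by confirming that the 1024 reserved 2048 kB pages ended up split evenly, 512 per NUMA node (the "node0=512 expecting 512" and "node1=512 expecting 512" lines). A rough stand-alone recreation of that final check is below; it reads the per-node sysfs hugepage counters directly instead of the nodes_test bookkeeping arrays used by hugepages.sh, and expected_per_node is an illustrative name.

  # Check that 2048 kB hugepages are spread evenly across NUMA nodes,
  # in the spirit of the "nodeN=512 expecting 512" lines in the log.
  expected_per_node=512    # 1024 pages over 2 nodes in this run

  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node${node}=${nr} expecting ${expected_per_node}"
      if [[ $nr -ne $expected_per_node ]]; then
          echo "node${node} allocation is uneven" >&2
          exit 1
      fi
  done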
setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:09.659 20:17:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:09.659 20:17:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.659 20:17:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.659 20:17:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.659 ************************************ 00:03:09.659 START TEST even_2G_alloc 00:03:09.659 ************************************ 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.659 20:17:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:12.194 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.194 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.194 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.194 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.194 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.195 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.458 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43869776 kB' 'MemAvailable: 46177204 kB' 'Buffers: 11496 
kB' 'Cached: 10270640 kB' 'SwapCached: 16 kB' 'Active: 8601832 kB' 'Inactive: 2283636 kB' 'Active(anon): 8126704 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606524 kB' 'Mapped: 177888 kB' 'Shmem: 7602196 kB' 'KReclaimable: 249236 kB' 'Slab: 794732 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545496 kB' 'KernelStack: 22080 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9572528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213748 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 
20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:12.459 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43868872 kB' 'MemAvailable: 46176300 kB' 'Buffers: 11496 kB' 'Cached: 10270640 kB' 'SwapCached: 16 kB' 'Active: 8602860 kB' 'Inactive: 2283636 kB' 'Active(anon): 8127732 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607604 kB' 'Mapped: 177832 kB' 'Shmem: 7602196 kB' 'KReclaimable: 249236 kB' 'Slab: 795148 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545912 kB' 'KernelStack: 22176 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9573800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213732 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 
20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.460 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.461 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.462 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43868024 kB' 'MemAvailable: 46175452 kB' 'Buffers: 11496 kB' 'Cached: 10270664 kB' 'SwapCached: 16 kB' 'Active: 8602464 kB' 'Inactive: 2283636 kB' 'Active(anon): 8127336 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607148 kB' 'Mapped: 177832 kB' 'Shmem: 7602220 kB' 'KReclaimable: 249236 kB' 'Slab: 795112 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545876 kB' 'KernelStack: 22128 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9574060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213748 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 
20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.462 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.462 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
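[Note] The repeated "IFS=': '", "read -r var val _", "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]", "continue" entries above are bash xtrace output from the get_meminfo helper in setup/common.sh: it walks the meminfo file one line at a time and skips every key until it reaches the one it was asked for (the backslashes are only how xtrace prints the literal pattern on the right-hand side of [[ == ]]). A minimal sketch of that helper, reconstructed from the @NN markers in this trace; the real implementation in setup/common.sh may differ in detail:

  get_meminfo() { # get_meminfo <MeminfoKey> [numa-node]
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo mem
      # With a node argument the per-node meminfo is read; with none,
      # node/node/meminfo does not exist and the global file is used,
      # as seen further down in this trace.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # strip the "Node N " prefix of per-node files (extglob)
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue # each non-matching key is one "continue" entry above
          echo "$val"                      # e.g. "0" for HugePages_Rsvd, "1024" for HugePages_Total
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }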
00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.463 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:12.464 nr_hugepages=1024 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.464 resv_hugepages=0 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.464 surplus_hugepages=0 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.464 anon_hugepages=0 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43867460 kB' 'MemAvailable: 46174888 kB' 'Buffers: 11496 kB' 'Cached: 10270680 kB' 'SwapCached: 16 kB' 'Active: 8602560 kB' 'Inactive: 2283636 kB' 'Active(anon): 8127432 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607148 kB' 'Mapped: 177832 kB' 'Shmem: 7602236 kB' 'KReclaimable: 249236 kB' 'Slab: 795112 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545876 kB' 'KernelStack: 22176 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9574080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213764 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
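[Note] The printf block above is the snapshot that get_meminfo HugePages_Total takes from the global /proc/meminfo (no node argument was given, so node/node/meminfo does not exist and the fallback path is used). It reports 'HugePages_Total: 1024', 'HugePages_Free: 1024' and 'Hugepagesize: 2048 kB', i.e. 1024 x 2048 kB = 2097152 kB = 2 GiB of hugepages, matching the 'Hugetlb: 2097152 kB' line and the even_2G_alloc test name. The hugepages.sh entries traced just before the snapshot (@107/@109/@110) boil down to a consistency check roughly like the following; this is an illustrative sketch using the values echoed in this run, not the literal script:

  nr_hugepages=1024   # requested by the test (echoed above as nr_hugepages=1024)
  resv=0 surp=0       # reserved and surplus hugepages read earlier in this trace
  total=$(get_meminfo HugePages_Total)   # 1024 in this run
  if (( total != nr_hugepages + surp + resv )); then
      echo "hugepage accounting mismatch: $total != $((nr_hugepages + surp + resv))" >&2
      exit 1
  fi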
00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.464 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:12.465 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26901404 kB' 'MemUsed: 5690680 kB' 'SwapCached: 16 kB' 'Active: 3014388 kB' 'Inactive: 180800 kB' 'Active(anon): 2797768 kB' 'Inactive(anon): 16 kB' 
'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2970192 kB' 'Mapped: 117832 kB' 'AnonPages: 228156 kB' 'Shmem: 2572772 kB' 'KernelStack: 12680 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134904 kB' 'Slab: 390844 kB' 'SReclaimable: 134904 kB' 'SUnreclaim: 255940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.466 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.466 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.727 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
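[Note] From here the same verification runs per NUMA node. The get_nodes trace above (hugepages.sh@27-@33) found two nodes with 512 pages expected on each (512 x 2 MiB = 1 GiB per node, 2 GiB total), and get_meminfo is now called with an explicit node index so it reads /sys/devices/system/node/node0/meminfo; the node0 snapshot shows 'HugePages_Total: 512', 'HugePages_Free: 512' and 'HugePages_Surp: 0'. A condensed sketch of the per-node loop visible in the @115-@117 entries, reusing the get_meminfo sketch above (seeding nodes_test from the nodes_sys values is an assumption; only the loop body appears in the trace):

  no_nodes=2
  nodes_test=( [0]=512 [1]=512 )   # expected hugepages per node
  resv=0                           # reserved pages, 0 in this run
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # 0 for node0 here
  done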
00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 16964804 kB' 'MemUsed: 10738344 kB' 'SwapCached: 0 kB' 'Active: 5588084 kB' 'Inactive: 2102836 kB' 'Active(anon): 5329576 kB' 
'Inactive(anon): 78808 kB' 'Active(file): 258508 kB' 'Inactive(file): 2024028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7312028 kB' 'Mapped: 60000 kB' 'AnonPages: 378896 kB' 'Shmem: 5029492 kB' 'KernelStack: 9480 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114332 kB' 'Slab: 404268 kB' 'SReclaimable: 114332 kB' 'SUnreclaim: 289936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.728 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.728 20:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
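The loop traced above is the get_meminfo helper from setup/common.sh walking /sys/devices/system/node/node1/meminfo one field at a time until it reaches the requested HugePages_Surp counter. A minimal sketch of that parsing pattern, reconstructed from the xtrace output rather than copied from the SPDK script, could look like this:

```bash
#!/usr/bin/env bash
# Hedged reconstruction of the get_meminfo pattern traced above; a sketch,
# not the verbatim SPDK setup/common.sh helper.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val

    # Prefer the per-node meminfo file when a NUMA node was requested.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it so the
    # field names match the plain /proc/meminfo layout.
    mem=("${mem[@]#Node +([0-9]) }")

    # Split each "Field:   value kB" line and print the value of the first
    # field that matches the requested key -- the continue/echo/return
    # sequence visible in the xtrace output above.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    echo 0
}

# Example matching the trace above: surplus hugepages on NUMA node 1.
get_meminfo HugePages_Surp 1
```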
00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:12.729 node0=512 expecting 512 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:12.729 node1=512 expecting 512 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:12.729 00:03:12.729 real 0m3.061s 00:03:12.729 user 0m1.095s 00:03:12.729 sys 0m1.974s 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:12.729 20:17:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:12.729 ************************************ 00:03:12.729 END TEST even_2G_alloc 00:03:12.729 ************************************ 00:03:12.729 20:17:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:12.729 20:17:04 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:12.729 20:17:04 setup.sh.hugepages 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.729 20:17:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.729 20:17:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.729 ************************************ 00:03:12.729 START TEST odd_alloc 00:03:12.729 ************************************ 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:12.729 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:12.730 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:12.730 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.730 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:12.730 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:12.730 20:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:12.730 20:17:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.730 20:17:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:16.023 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:00:04.4 (8086 2021): Already 
using the vfio-pci driver 00:03:16.023 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.023 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43922896 kB' 'MemAvailable: 46230324 kB' 'Buffers: 11496 kB' 'Cached: 10270800 kB' 'SwapCached: 16 kB' 'Active: 8603460 kB' 'Inactive: 2283636 kB' 'Active(anon): 8128332 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607556 kB' 'Mapped: 177996 kB' 'Shmem: 7602356 kB' 
'KReclaimable: 249236 kB' 'Slab: 794548 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545312 kB' 'KernelStack: 22080 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9572088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 
20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.023 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 
20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
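For the odd_alloc case set up above, HUGEMEM=2049 translates into 1025 pages of 2048 kB, and the hugepages.sh trace earlier in this run (the nodes_test[_no_nodes - 1]=512 and =513 assignments) splits them across the two NUMA nodes. A hedged sketch of that split, with variable names taken from the trace but the arithmetic and function name reconstructed rather than copied from setup/hugepages.sh, and the user_nodes branches omitted:

```bash
#!/usr/bin/env bash
# Hedged sketch of the per-node split that get_test_nr_hugepages_per_node
# performs in the odd_alloc setup traced earlier; reconstructed from the
# xtrace output, not the verbatim SPDK helper.
split_hugepages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test=()

    # Walk the nodes from the highest index down: each node takes an even
    # share, and the odd remainder ends up on node 0 (513 + 512 = 1025).
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
        : $(( --_no_nodes ))
    done

    declare -p nodes_test
}

# 2049 MB of 2048 kB pages -> 1025 pages: node0=513, node1=512.
split_hugepages_per_node 1025 2
```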
00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43923160 kB' 'MemAvailable: 46230588 kB' 'Buffers: 11496 kB' 'Cached: 10270808 kB' 'SwapCached: 16 kB' 'Active: 8602676 kB' 'Inactive: 2283636 kB' 'Active(anon): 8127548 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607268 kB' 'Mapped: 177844 kB' 'Shmem: 7602364 kB' 'KReclaimable: 249236 kB' 'Slab: 794528 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545292 kB' 'KernelStack: 22048 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9572104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
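The verify_nr_hugepages pass that this trace is working through checks the transparent-hugepage setting, then samples AnonHugePages, HugePages_Surp and HugePages_Rsvd before re-checking the per-node totals. A compressed, hedged sketch of that accounting; the helper name `get` and the final check are stand-ins reconstructed from the trace, and the 1025-page expectation comes from the odd_alloc setup above:

```bash
#!/usr/bin/env bash
# Hedged sketch of the verify_nr_hugepages accounting traced in this section;
# reconstructed from the xtrace output, not the verbatim setup/hugepages.sh.

# Standalone stand-in for get_meminfo, global /proc/meminfo only.
get() { awk -v k="$1:" '$1 == k {print $2; exit}' /proc/meminfo; }

verify_nr_hugepages() {
    local expected=$1
    local anon surp resv total

    anon=$(get AnonHugePages)    # transparent hugepages currently mapped
    surp=$(get HugePages_Surp)   # surplus (overcommitted) pages
    resv=$(get HugePages_Rsvd)   # reserved but not yet faulted in
    total=$(get HugePages_Total)

    echo "total=$total anon=$anon surp=$surp resv=$resv"
    # The check boils down to: the persistent pool (total minus surplus)
    # matches what the allocator was asked for -- 1025 pages for odd_alloc.
    (( total - surp == expected ))
}

verify_nr_hugepages 1025
```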
00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.024 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.025 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43923980 kB' 'MemAvailable: 46231408 kB' 'Buffers: 11496 kB' 'Cached: 10270820 kB' 'SwapCached: 16 kB' 'Active: 8602688 kB' 'Inactive: 2283636 kB' 'Active(anon): 8127560 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607264 kB' 'Mapped: 177844 kB' 'Shmem: 7602376 kB' 'KReclaimable: 249236 kB' 'Slab: 794528 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545292 kB' 'KernelStack: 22048 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9572124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.026 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
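The xtrace above is setup/common.sh's get_meminfo helper walking meminfo one 'field: value' pair at a time: it reads each line with IFS=': ' and read -r, skips every key that does not match the one requested (HugePages_Surp, then HugePages_Rsvd), and echoes the matching value before returning. A minimal stand-alone sketch of that parsing pattern, reconstructed from the trace rather than taken from the project's actual setup/common.sh (the function body, the node handling, and the stripping of the 'Node N ' prefix are assumptions), is:

#!/usr/bin/env bash
# Hypothetical re-rendering of the meminfo field reader traced above.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val rest
    # Per-node queries read the sysfs copy, as the trace later does for node 0 and node 1.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}             # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then          # non-matching keys are skipped ("continue" in the trace)
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0                                     # assumed fallback when the field is absent
}

Against the meminfo snapshot printed in this run, get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd both yield 0, which is what hugepages.sh records as surp=0 and resv=0.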
00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.027 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.028 
20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:16.028 nr_hugepages=1025 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.028 resv_hugepages=0 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.028 surplus_hugepages=0 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.028 anon_hugepages=0 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.028 20:17:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43924324 kB' 'MemAvailable: 46231752 kB' 'Buffers: 11496 kB' 'Cached: 10270844 kB' 'SwapCached: 16 kB' 'Active: 8602708 kB' 'Inactive: 2283636 kB' 'Active(anon): 8127580 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607264 kB' 'Mapped: 177844 kB' 'Shmem: 7602400 kB' 'KReclaimable: 249236 kB' 'Slab: 794528 kB' 'SReclaimable: 249236 kB' 'SUnreclaim: 545292 kB' 'KernelStack: 22048 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 9572144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.028 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.029 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26913968 kB' 'MemUsed: 5678116 kB' 'SwapCached: 16 kB' 'Active: 3014824 kB' 'Inactive: 180800 kB' 'Active(anon): 2798204 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2970304 kB' 'Mapped: 117824 kB' 'AnonPages: 228468 kB' 'Shmem: 2572884 kB' 'KernelStack: 12696 kB' 'PageTables: 4388 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134904 kB' 'Slab: 390652 kB' 'SReclaimable: 134904 kB' 'SUnreclaim: 255748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 
20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
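At this point hugepages.sh has confirmed nr_hugepages=1025 with no surplus or reserved pages, discovered two NUMA nodes, and is reading HugePages_Surp for each node in turn; the per-node meminfo snapshots show 512 hugepages on node0 and 513 on node1, the odd split this test expects. A rough condensation of that bookkeeping, assuming the get_meminfo sketch earlier in this log and hypothetical variable names where the trace does not show them:

# Condensed per-node check implied by the trace; not the literal hugepages.sh source.
nr_hugepages=1025 surp=0 resv=0
nodes_test=( [0]=512 [1]=513 )        # expected hugepages per NUMA node for the odd allocation

total=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                     # fold reserved pages into the expectation
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))    # plus any per-node surplus (0 in this run)
    (( total += nodes_test[node] ))
done
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage distribution" >&2

With both per-node surplus reads returning 0, the totals reduce to 512 + 513 = 1025, matching nr_hugepages + surp + resv.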
00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.030 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 17010356 kB' 'MemUsed: 10692792 kB' 'SwapCached: 0 kB' 'Active: 5588324 kB' 'Inactive: 2102836 kB' 'Active(anon): 5329816 kB' 'Inactive(anon): 78808 kB' 'Active(file): 258508 kB' 'Inactive(file): 2024028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7312068 kB' 'Mapped: 60020 kB' 'AnonPages: 379168 kB' 'Shmem: 5029532 kB' 'KernelStack: 9368 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114332 kB' 'Slab: 403876 kB' 'SReclaimable: 114332 kB' 'SUnreclaim: 289544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 
20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.031 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
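The odd_alloc trace above repeats one pattern per meminfo field: read a name/value pair with IFS=': ', skip with continue until the requested field (here HugePages_Surp) is reached, then echo its value and return. A condensed Bash sketch of that lookup follows, assuming the same /proc and per-node /sys layout; the function name and exact structure are illustrative, not the verbatim setup/common.sh.

    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo
        local line var val _
        # Prefer the per-NUMA-node view when a node index is given and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # Per-node files prefix each line with "Node <N> "; strip it so the
            # field names match plain /proc/meminfo (the trace does the same).
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # field-by-field skip, as in the trace
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Surp 1   -> 0, matching the node1 dump above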
00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:16.032 node0=512 expecting 513 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:16.032 node1=513 expecting 512 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:16.032 00:03:16.032 real 0m3.328s 00:03:16.032 user 0m1.251s 00:03:16.032 sys 0m2.051s 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.032 20:17:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:16.032 ************************************ 00:03:16.033 END TEST odd_alloc 00:03:16.033 ************************************ 00:03:16.033 20:17:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:16.033 20:17:08 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:16.033 20:17:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.033 20:17:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.033 20:17:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.033 ************************************ 00:03:16.033 START TEST custom_alloc 00:03:16.033 ************************************ 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.033 20:17:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:19.423 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:80:04.7 
(8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.423 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42900444 kB' 'MemAvailable: 45207856 kB' 'Buffers: 11496 kB' 'Cached: 10270964 kB' 'SwapCached: 16 kB' 'Active: 8609608 kB' 'Inactive: 2283636 kB' 'Active(anon): 8134480 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613528 kB' 'Mapped: 178484 kB' 'Shmem: 7602520 kB' 'KReclaimable: 249204 kB' 'Slab: 794068 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 544864 kB' 'KernelStack: 22032 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 9578888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213576 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.423 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
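The custom_alloc prologue traced earlier turns two size requests (1048576 kB and 2097152 kB) into per-node 2 MiB hugepage counts and folds them into the HUGENODE assignment. A minimal sketch of that arithmetic, assuming Hugepagesize is 2048 kB as the meminfo dumps report; variable names mirror the trace but the code is illustrative rather than the verbatim hugepages.sh.

    default_hugepages=2048                                # kB, per 'Hugepagesize: 2048 kB'
    declare -a nodes_hp
    nodes_hp[0]=$(( 1048576 / default_hugepages ))        # 1 GiB request  -> 512 pages
    nodes_hp[1]=$(( 2097152 / default_hugepages ))        # 2 GiB request  -> 1024 pages
    HUGENODE='' nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+="${HUGENODE:+,}nodes_hp[$node]=${nodes_hp[node]}"
        (( nr_hugepages += nodes_hp[node] ))
    done
    echo "$HUGENODE"       # nodes_hp[0]=512,nodes_hp[1]=1024, as in the trace
    echo "$nr_hugepages"   # 1536, matching 'HugePages_Total: 1536' in the dumps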
00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 
20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.424 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
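The verify_nr_hugepages step above (hugepages.sh@96) only samples AnonHugePages when transparent hugepages are not pinned to [never], which is why the anon count is gathered before the HugePages_Surp pass that follows. A small sketch of that guard, assuming the standard sysfs knob; the variable names are illustrative.

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon_kb=0
    fi
    echo "AnonHugePages: ${anon_kb} kB"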
00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42900400 kB' 'MemAvailable: 45207812 kB' 'Buffers: 11496 kB' 'Cached: 10270968 kB' 'SwapCached: 16 kB' 'Active: 8604704 kB' 'Inactive: 2283636 kB' 'Active(anon): 8129576 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608628 kB' 'Mapped: 177980 kB' 'Shmem: 7602524 kB' 'KReclaimable: 249204 kB' 'Slab: 794068 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 544864 kB' 'KernelStack: 22032 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 9572788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213556 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.425 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 
20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42897236 kB' 'MemAvailable: 45204648 kB' 'Buffers: 11496 kB' 'Cached: 10270988 kB' 'SwapCached: 16 kB' 'Active: 8606188 kB' 'Inactive: 2283636 kB' 'Active(anon): 8131060 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 
2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610540 kB' 'Mapped: 178344 kB' 'Shmem: 7602544 kB' 'KReclaimable: 249204 kB' 'Slab: 794068 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 544864 kB' 'KernelStack: 22000 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 9576668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213540 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 
20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.426 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.427 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
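The records above and below are the get_meminfo helper from setup/common.sh scanning every /proc/meminfo key until it reaches the one requested (HugePages_Surp, HugePages_Rsvd, then HugePages_Total in turn). A minimal sketch of that parsing pattern follows; it is reconstructed from the trace rather than copied from setup/common.sh, and the names (get, node, mem_f) simply mirror what appears in the records.

    # Illustrative sketch only -- not the verbatim SPDK helper.
    # Print the value of one /proc/meminfo key, optionally for a single NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # per-node counters live under /sys/devices/system/node/nodeN/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            # per-node files prefix each row with "Node <id> "; drop that prefix
            line=${line#Node "$node" }
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

On this machine "get_meminfo_sketch HugePages_Rsvd" would print 0, which is the resv=0 the trace settles on a few records later.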
00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:19.428 nr_hugepages=1536 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:19.428 resv_hugepages=0 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:19.428 surplus_hugepages=0 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:19.428 anon_hugepages=0 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42893204 kB' 'MemAvailable: 45200616 kB' 'Buffers: 11496 kB' 'Cached: 10270988 kB' 'SwapCached: 16 kB' 'Active: 8608896 kB' 'Inactive: 2283636 kB' 'Active(anon): 8133768 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613252 kB' 'Mapped: 178344 kB' 'Shmem: 7602544 kB' 'KReclaimable: 249204 kB' 'Slab: 794068 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 544864 kB' 'KernelStack: 22032 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 
'Committed_AS: 9578948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213544 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
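Around this point the trace is doing the hugepage bookkeeping for the custom allocation: HugePages_Total comes back as 1536 while HugePages_Rsvd and HugePages_Surp are 0, and setup/hugepages.sh only proceeds when the pool exactly matches the request before comparing the per-node split that follows (512 pages on node0, 1024 on node1, no_nodes=2). A rough sketch of that check, reusing get_meminfo_sketch from above and kept illustrative rather than verbatim:

    nr_hugepages=1536                              # requested: 512 + 1024 custom split
    surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)    # 1536 in this run

    # only continue when the pool is exactly the requested size and nothing is
    # reserved or surplus, otherwise the per-node comparison is meaningless
    if (( total == nr_hugepages + surp + resv )); then
        declare -A nodes_test=([0]=512 [1]=1024)   # expected custom_alloc layout
        for node in "${!nodes_test[@]}"; do
            actual=$(get_meminfo_sketch HugePages_Total "$node")
            echo "node$node: want ${nodes_test[$node]}, have $actual"
        done
    fi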
00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.428 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26934160 kB' 'MemUsed: 5657924 kB' 'SwapCached: 16 kB' 'Active: 3013736 kB' 'Inactive: 180800 kB' 'Active(anon): 2797116 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2970432 kB' 'Mapped: 117840 kB' 'AnonPages: 227252 kB' 'Shmem: 2573012 kB' 'KernelStack: 12648 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134872 kB' 'Slab: 390148 kB' 'SReclaimable: 134872 kB' 'SUnreclaim: 255276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:19.429 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
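The scan running here is part of custom_alloc's final verification: get_meminfo already reported HugePages_Total as 1536 for the whole system, and the test now confirms that this equals the pool it requested (512 pages on node 0 plus 1024 on node 1) plus any surplus and reserved pages, which are both 0 in this run. A minimal, self-contained sketch of that arithmetic, with the numbers taken from the dumps above (not the verbatim setup/hugepages.sh source):

    nr_hugepages=1536                 # pool size custom_alloc asked for, in 2 MiB pages
    nodes_sys=(512 1024)              # requested split: 512 on node0, 1024 on node1
    surp=0 resv=0                     # HugePages_Surp / HugePages_Rsvd, both 0 in this run
    total=1536                        # HugePages_Total that get_meminfo just echoed above
    (( total == nr_hugepages + surp + resv ))         && echo "pool size OK"
    (( nodes_sys[0] + nodes_sys[1] == nr_hugepages )) && echo "per-node split OK"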
00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
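Every "[[ Field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" pair in the trace is one iteration of get_meminfo's field scan; the backslashes are simply how xtrace prints the quoted field name. Below is a reconstructed, minimal sketch of that scan, not the verbatim setup/common.sh, but the same pattern the trace shows: pick the global or per-node meminfo file, strip the "Node N " prefix, split each line on ": ", and echo the value of the requested field.

    #!/usr/bin/env bash
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N " on per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip MemTotal, SReclaimable, ... until it matches
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Surp 0           # prints 0 on this machine, per the node0 dump above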
00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.430 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.430 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 15964952 kB' 'MemUsed: 11738196 kB' 'SwapCached: 0 kB' 'Active: 5589540 kB' 'Inactive: 2102836 kB' 'Active(anon): 5331032 kB' 'Inactive(anon): 78808 kB' 'Active(file): 258508 kB' 'Inactive(file): 2024028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7312108 kB' 'Mapped: 60000 kB' 'AnonPages: 380432 kB' 'Shmem: 5029572 kB' 'KernelStack: 9384 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114332 kB' 'Slab: 403920 kB' 'SReclaimable: 114332 kB' 'SUnreclaim: 289588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 
20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
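The scan in progress here is the second pass of the per-node loop (node=1, using the node1 dump above). For each node the test adds the reserved and surplus page counts to nodes_test[node] and then compares the result against what it asked for, which produces the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines further down. In sketch form, using the variable names visible in the xtrace rather than the exact setup/hugepages.sh source:

    nodes_test=(512 1024)                  # what custom_alloc asked for on node0 / node1
    nodes_sys=(512 1024)                   # what get_nodes read back from the kernel
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))     # reserved pages (0 here)
        surp=0                             # get_meminfo HugePages_Surp $node -> 0 for both nodes
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Final check, as in the trace: the comma-joined actual and expected lists must match.
    [[ "${nodes_sys[0]},${nodes_sys[1]}" == "${nodes_test[0]},${nodes_test[1]}" ]] && echo "custom_alloc OK"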
00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.431 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:19.432 node0=512 expecting 512 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:19.432 node1=1024 expecting 1024 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:19.432 00:03:19.432 real 0m3.133s 00:03:19.432 user 0m1.099s 00:03:19.432 sys 0m2.045s 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.432 20:17:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:19.432 ************************************ 00:03:19.432 END TEST custom_alloc 00:03:19.432 ************************************ 00:03:19.432 20:17:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:19.432 20:17:11 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:19.432 20:17:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.432 20:17:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.432 20:17:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.432 ************************************ 00:03:19.432 START TEST no_shrink_alloc 00:03:19.432 ************************************ 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.432 20:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:22.727 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.727 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.728 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
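Between the two tests, get_test_nr_hugepages 2097152 0 converts the requested size in kB into a page count and pins it to the node list passed after the size; no_shrink_alloc asks for everything on node 0. verify_nr_hugepages then starts by checking the transparent-hugepage setting (the "always [madvise] never != *[never]*" test just above) before re-reading the counters. Roughly, assuming the 2048 kB default hugepage size reported in the dumps:

    size_kb=2097152                        # argument to get_test_nr_hugepages (2 GiB)
    default_hugepages=2048                 # Hugepagesize in kB, from /proc/meminfo
    (( nr_hugepages = size_kb / default_hugepages ))   # -> 1024 pages
    user_nodes=(0)                         # remaining arguments name the target NUMA nodes
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages     # all 1024 pages expected on node 0
    done
    # verify_nr_hugepages only reads AnonHugePages when THP is not set to "never":
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    [[ $thp != *"[never]"* ]] && echo "THP active: will also check AnonHugePages"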
00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43918736 kB' 'MemAvailable: 46226148 kB' 'Buffers: 11496 kB' 'Cached: 10271124 kB' 'SwapCached: 16 kB' 'Active: 8604236 kB' 'Inactive: 2283636 kB' 'Active(anon): 8129108 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608040 kB' 'Mapped: 177992 kB' 'Shmem: 7602680 kB' 'KReclaimable: 249204 kB' 'Slab: 794300 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 545096 kB' 'KernelStack: 22048 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9573600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213636 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.728 20:17:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[... get_meminfo scans the remaining cached /proc/meminfo fields (Unevictable through HardwareCorrupted, in meminfo order); none matches AnonHugePages, so setup/common.sh@31-32 reads the next field and continues for each ...]
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.729 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43919320 kB' 'MemAvailable: 46226732 kB' 'Buffers: 11496 kB' 'Cached: 10271128 kB' 'SwapCached: 16 kB' 'Active: 8604004 kB' 'Inactive: 2283636 kB' 'Active(anon): 8128876 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608300 kB' 'Mapped: 177852 kB' 'Shmem: 7602684 kB' 'KReclaimable: 249204 kB' 'Slab: 794288 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 545084 kB' 'KernelStack: 22048 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9573620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
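What the trace above shows is setup/common.sh's get_meminfo helper: it replays the cached meminfo text and then reads it back one "Field: value" pair at a time until the requested field matches, echoing that field's value. A minimal standalone sketch of the same idea, reading plain /proc/meminfo directly (illustrative only, not the SPDK helper itself):

get_meminfo() {
    local get=$1            # field to look up, e.g. HugePages_Surp
    local var val _
    # Scan "Field:   value [kB]" pairs; IFS=': ' splits on the colon and spaces.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"     # e.g. 0 for HugePages_Surp on this run
            return 0
        fi
    done < /proc/meminfo
    echo 0                  # assumed fallback if the field is missing
}

surp=$(get_meminfo HugePages_Surp)   # matches the surp=0 assignment seen below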
[... the HugePages_Surp query walks the dump above field by field (MemTotal through HugePages_Rsvd, in meminfo order); none matches HugePages_Surp, so setup/common.sh@31-32 reads the next field and continues for each ...]
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.731 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43920012 kB' 'MemAvailable: 46227424 kB' 'Buffers: 11496 kB' 'Cached: 10271144 kB' 'SwapCached: 16 kB' 'Active: 8604056 kB' 'Inactive: 2283636 kB' 'Active(anon): 8128928 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608340 kB' 'Mapped: 177852 kB' 'Shmem: 7602700 kB' 'KReclaimable: 249204 kB' 'Slab: 794280 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 545076 kB' 'KernelStack: 22048 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9573640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
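The common.sh@28-29 steps visible in the trace cache the meminfo text into an array and strip the "Node N " prefix carried by per-NUMA-node meminfo files, so the same reader works for /proc/meminfo and /sys/devices/system/node/nodeN/meminfo alike. A sketch of just that caching step (extglob assumed; the node0 path is only an example):

shopt -s extglob
mem_f=/proc/meminfo                      # or /sys/devices/system/node/node0/meminfo
mapfile -t mem < "$mem_f"                # one array element per meminfo line
mem=("${mem[@]#Node +([0-9]) }")         # "Node 0 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[@]}" | head -n 3    # the cached lines, as common.sh@16 replays them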
[... the HugePages_Rsvd query walks the dump above field by field (MemTotal through HugePages_Free, in meminfo order); none matches HugePages_Rsvd, so setup/common.sh@31-32 reads the next field and continues for each ...]
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:22.733 nr_hugepages=1024
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:22.733 resv_hugepages=0
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:22.733 surplus_hugepages=0
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:22.733 anon_hugepages=0
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43920604 kB' 'MemAvailable: 46228016 kB' 'Buffers: 11496 kB' 'Cached: 10271168 kB' 'SwapCached: 16 kB' 'Active: 8604076 kB' 'Inactive: 2283636 kB' 'Active(anon): 8128948 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608296 kB' 'Mapped: 177852 kB' 'Shmem: 7602724 kB' 'KReclaimable: 249204 kB' 'Slab: 794280 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 545076 kB' 'KernelStack: 22048 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9573664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB'
00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
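At this point hugepages.sh has collected the anonymous, surplus and reserved counts (all 0 here) and checks that the configured pool of 1024 pages is fully accounted for and was not shrunk. A rough standalone rendition of those checks, with nr_hugepages assumed to already hold the requested count and get_meminfo as sketched earlier (variable names follow the trace; the rest is illustrative):

nr_hugepages=1024                          # the page count this run configured
anon=$(get_meminfo AnonHugePages)          # transparent hugepages in use  -> 0
surp=$(get_meminfo HugePages_Surp)         # surplus pages                 -> 0
resv=$(get_meminfo HugePages_Rsvd)         # reserved pages                -> 0
printf '%s\n' "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
    "surplus_hugepages=$surp" "anon_hugepages=$anon"
(( 1024 == nr_hugepages + surp + resv ))   # pool target accounts for surplus + reserved
(( 1024 == nr_hugepages ))                 # nothing was shrunk away from the request
total=$(get_meminfo HugePages_Total)       # re-read the pool size, as the trace does next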
-- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.733 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.734 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # 
return 0 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25884192 kB' 'MemUsed: 6707892 kB' 'SwapCached: 16 kB' 'Active: 3014536 kB' 'Inactive: 180800 kB' 'Active(anon): 2797916 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2970536 kB' 'Mapped: 117852 kB' 'AnonPages: 228516 kB' 'Shmem: 2573116 kB' 'KernelStack: 12712 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134872 kB' 'Slab: 390340 kB' 'SReclaimable: 134872 kB' 'SUnreclaim: 255468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.735 20:17:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.735 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.736 node0=1024 expecting 1024 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.736 20:17:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:26.035 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.035 
0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.035 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.035 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.035 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43936916 kB' 'MemAvailable: 46244328 kB' 'Buffers: 11496 kB' 'Cached: 10271268 kB' 'SwapCached: 16 kB' 'Active: 8605528 kB' 'Inactive: 2283636 kB' 'Active(anon): 8130400 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609164 kB' 'Mapped: 177948 kB' 'Shmem: 7602824 kB' 'KReclaimable: 249204 kB' 'Slab: 793764 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 544560 kB' 'KernelStack: 22080 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9574460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213652 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
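The xtrace records above and below show the pattern used by setup/common.sh's get_meminfo helper: it reads a meminfo file one "key: value" pair at a time with IFS=': ' and read -r var val _, skips (continue) every key that is not the one requested (HugePages_Total, HugePages_Surp, AnonHugePages, ...), echoes the value of the first match and returns; for a per-node query it switches from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo and strips the leading "Node N " prefix first. The following is a minimal stand-alone sketch of that pattern, not the actual SPDK helper; the function name, argument order and the sed-based prefix strip are illustrative assumptions.

get_meminfo_value() {
  # Sketch only: look up one meminfo key, optionally for a single NUMA node.
  local get=$1 node=$2
  local mem_f=/proc/meminfo var val _
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # Per-node meminfo lines carry a "Node N " prefix; drop it so the key
  # comparison works the same way for both files.
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # e.g. skip MemTotal, MemFree, ...
    echo "$val"
    return 0
  done < <(sed 's/^Node [0-9]* //' "$mem_f")
  return 1
}

# e.g. get_meminfo_value HugePages_Total 0   -> 1024 on this runner

The "(( 1024 == nr_hugepages + surp + resv ))" and "node0=1024 expecting 1024" checks earlier in the trace are this lookup applied per node and compared against the requested hugepage count.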
00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.036 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43939864 kB' 'MemAvailable: 46247276 kB' 'Buffers: 11496 kB' 'Cached: 10271272 kB' 'SwapCached: 16 kB' 'Active: 8604940 kB' 'Inactive: 2283636 kB' 'Active(anon): 8129812 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609072 kB' 'Mapped: 177856 kB' 'Shmem: 7602828 kB' 'KReclaimable: 249204 kB' 'Slab: 793668 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 544464 kB' 'KernelStack: 22048 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9577092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213604 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 
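A quick sanity check on the /proc/meminfo snapshot printed above: its hugepage fields are internally consistent, since the Hugetlb figure is simply the pool size times the page size (1024 pages of 2048 kB each). Using only the numbers already shown in that dump:

echo $(( 1024 * 2048 )) kB    # 2097152 kB, matching the 'Hugetlb: 2097152 kB' field above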
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.037 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 
20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.038 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.039 20:17:18 
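The repetitive xtrace above is all one loop: get_meminfo reads /proc/meminfo (or a per-node meminfo file when a node argument is given), splits each line on ': ', skips every field that is not the one requested, and echoes the matching value, which is why the HugePages_Surp lookup ends with 'echo 0' / 'return 0' and hugepages.sh records surp=0. Below is a minimal, self-contained sketch of that loop, reconstructed from the traced setup/common.sh lines rather than copied from the SPDK tree, so treat the argument handling and prefix stripping as illustrative.

#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop exercised by the xtrace above
# (reconstructed from the trace; not the verbatim SPDK source).
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # With a node argument, prefer the per-node meminfo file if it exists
    # (the run above passes no node, so it falls back to /proc/meminfo).
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Skip every field except the requested one, then print its value.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo HugePages_Surp    # prints 0 in the run traced above

Feeding it MemTotal instead would print 60295232, the first field of the snapshot dumped earlier in this log.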
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43940948 kB' 'MemAvailable: 46248360 kB' 'Buffers: 11496 kB' 'Cached: 10271288 kB' 'SwapCached: 16 kB' 'Active: 8604992 kB' 'Inactive: 2283636 kB' 'Active(anon): 8129864 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609160 kB' 'Mapped: 177876 kB' 'Shmem: 7602844 kB' 'KReclaimable: 249204 kB' 'Slab: 793668 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 544464 kB' 'KernelStack: 22080 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9577116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.039 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.040 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.041 nr_hugepages=1024 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.041 resv_hugepages=0 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.041 surplus_hugepages=0 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.041 anon_hugepages=0 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.041 20:17:18 
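At this point the no_shrink_alloc test has collected anon=0, surp=0 and resv=0, echoes the summary lines (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and asserts that the counters add up before fetching HugePages_Total. The sketch below condenses that accounting step; the awk helper, the variable names, and the exact comparisons are reconstructions from the trace (the literal 1024 on the left of the traced '(( 1024 == nr_hugepages + surp + resv ))' is assumed here to be the free-page count), not the verbatim setup/hugepages.sh code.

#!/usr/bin/env bash
# Illustrative reconstruction of the hugepage accounting traced above.
# get_meminfo here is a stand-in helper, not the SPDK implementation.
get_meminfo() { awk -v k="$1" -F': +' '$1 == k {print $2 + 0; exit}' /proc/meminfo; }

nr_hugepages=1024                          # pool size this test run configures
anon=$(get_meminfo AnonHugePages)          # 0 in the run above
surp=$(get_meminfo HugePages_Surp)         # 0
resv=$(get_meminfo HugePages_Rsvd)         # 0
free=$(get_meminfo HugePages_Free)         # 1024

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Consistency checks mirroring the traced arithmetic: the free pages must be
# covered by the requested + surplus + reserved pages, and the pool itself
# must match what was requested.
(( free == nr_hugepages + surp + resv )) || exit 1
(( $(get_meminfo HugePages_Total) == nr_hugepages )) || exit 1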
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 43941328 kB' 'MemAvailable: 46248740 kB' 'Buffers: 11496 kB' 'Cached: 10271288 kB' 'SwapCached: 16 kB' 'Active: 8604848 kB' 'Inactive: 2283636 kB' 'Active(anon): 8129720 kB' 'Inactive(anon): 78824 kB' 'Active(file): 475128 kB' 'Inactive(file): 2204812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387580 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609016 kB' 'Mapped: 177876 kB' 'Shmem: 7602844 kB' 'KReclaimable: 249204 kB' 'Slab: 793652 kB' 'SReclaimable: 249204 kB' 'SUnreclaim: 544448 kB' 'KernelStack: 22144 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 9575648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213588 kB' 'VmallocChunk: 0 kB' 'Percpu: 82880 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.041 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.042 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25899256 kB' 'MemUsed: 6692828 kB' 'SwapCached: 16 kB' 'Active: 3015700 kB' 'Inactive: 180800 kB' 'Active(anon): 2799080 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2970660 kB' 'Mapped: 117876 kB' 'AnonPages: 229176 kB' 'Shmem: 2573240 kB' 'KernelStack: 12760 kB' 'PageTables: 4744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134872 kB' 'Slab: 389824 kB' 'SReclaimable: 134872 kB' 'SUnreclaim: 254952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 
20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.043 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:26.044 node0=1024 expecting 1024 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:26.044 00:03:26.044 real 0m6.729s 00:03:26.044 user 0m2.492s 00:03:26.044 sys 0m4.304s 00:03:26.044 
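[editor's note] The long trace above is setup/common.sh's get_meminfo scanning every key of /proc/meminfo (or the per-node copy under /sys/devices/system/node/node0/meminfo) until it hits HugePages_Total, and then HugePages_Surp for node 0. A minimal sketch of that parsing pattern, assuming a simplified stand-in rather than the SPDK script itself (the real function uses mapfile plus an extglob strip of the "Node N " prefix; the helper below does the same job more directly):

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node requests read the node-local copy, whose lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node [0-9]* }              # strip "Node N " if present
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then          # e.g. HugePages_Total or HugePages_Surp
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

With the values echoed in the trace, get_meminfo HugePages_Total yields 1024 and get_meminfo HugePages_Surp 0 yields 0, which is what feeds the (( 1024 == nr_hugepages + surp + resv )) check and the closing 'node0=1024 expecting 1024' assertion.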
20:17:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.044 20:17:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:26.044 ************************************ 00:03:26.044 END TEST no_shrink_alloc 00:03:26.044 ************************************ 00:03:26.044 20:17:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:26.044 20:17:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:26.044 00:03:26.044 real 0m24.488s 00:03:26.044 user 0m8.209s 00:03:26.044 sys 0m14.722s 00:03:26.044 20:17:18 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.044 20:17:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.044 ************************************ 00:03:26.044 END TEST hugepages 00:03:26.044 ************************************ 00:03:26.044 20:17:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:26.044 20:17:18 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:26.044 20:17:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.044 20:17:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.044 20:17:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:26.304 ************************************ 00:03:26.304 START TEST driver 00:03:26.304 ************************************ 00:03:26.304 20:17:18 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:26.304 * Looking for test storage... 
00:03:26.304 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:26.304 20:17:18 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:26.304 20:17:18 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.304 20:17:18 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.582 20:17:23 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:31.582 20:17:23 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.582 20:17:23 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.582 20:17:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:31.582 ************************************ 00:03:31.582 START TEST guess_driver 00:03:31.582 ************************************ 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:31.582 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:31.582 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:31.582 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:31.582 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:31.582 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:31.582 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:31.582 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:31.582 20:17:23 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:31.582 Looking for driver=vfio-pci 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.582 20:17:23 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.116 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.374 20:17:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.751 20:17:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.751 20:17:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.751 20:17:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.010 20:17:28 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:36.010 20:17:28 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:36.010 20:17:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.010 20:17:28 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.287 00:03:41.287 real 0m9.658s 00:03:41.287 user 0m2.563s 00:03:41.287 sys 0m4.846s 00:03:41.287 20:17:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.287 20:17:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:41.287 ************************************ 00:03:41.287 END TEST guess_driver 00:03:41.287 ************************************ 00:03:41.287 20:17:32 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:41.287 00:03:41.287 real 0m14.529s 00:03:41.287 user 0m3.939s 
00:03:41.287 sys 0m7.583s 00:03:41.287 20:17:32 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.287 20:17:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:41.287 ************************************ 00:03:41.287 END TEST driver 00:03:41.287 ************************************ 00:03:41.287 20:17:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:41.287 20:17:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:41.287 20:17:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.287 20:17:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.287 20:17:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.287 ************************************ 00:03:41.287 START TEST devices 00:03:41.287 ************************************ 00:03:41.287 20:17:33 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:41.287 * Looking for test storage... 00:03:41.287 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:41.287 20:17:33 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:41.287 20:17:33 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:41.287 20:17:33 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.287 20:17:33 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:44.576 20:17:36 setup.sh.devices -- 
scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:44.576 20:17:36 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:44.576 No valid GPT data, bailing 00:03:44.576 20:17:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:44.576 20:17:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:44.576 20:17:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:44.576 20:17:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:44.576 20:17:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:44.576 20:17:36 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:44.576 20:17:36 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.576 20:17:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.576 ************************************ 00:03:44.576 START TEST nvme_mount 00:03:44.576 ************************************ 00:03:44.576 20:17:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:44.576 20:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:44.576 20:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:44.576 20:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.576 20:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:44.576 20:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:44.576 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:44.576 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:44.576 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 
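[editor's note] Before the nvme_mount partitioning that starts just above, devices.sh qualified /dev/nvme0n1 (PCI 0000:d8:00.0) as the test disk: it is not a zoned namespace, spdk-gpt.py/blkid reported no partition table in use ("No valid GPT data, bailing"), and its 1600321314816-byte size clears the 3221225472-byte minimum. A hedged sketch of those qualification checks, with illustrative helper names rather than the SPDK ones (note the /sys/block size file counts 512-byte sectors):

shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))         # 3221225472, as in the trace

is_zoned() {                                       # mirrors the queue/zoned test in the trace
    [[ -e /sys/block/$1/queue/zoned && $(< /sys/block/$1/queue/zoned) != none ]]
}

dev_size_bytes() {
    echo $(( $(< /sys/block/$1/size) * 512 ))      # size is reported in 512-byte sectors
}

for path in /sys/block/nvme*n*; do
    dev=${path##*/}
    is_zoned "$dev" && continue                    # skip zoned namespaces
    (( $(dev_size_bytes "$dev") >= min_disk_size )) || continue
    echo "candidate test disk: $dev"               # the trace settles on nvme0n1
done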
00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:44.577 20:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:45.515 Creating new GPT entries in memory. 00:03:45.515 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:45.515 other utilities. 00:03:45.515 20:17:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:45.515 20:17:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.515 20:17:37 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:45.515 20:17:37 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:45.515 20:17:37 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:46.452 Creating new GPT entries in memory. 00:03:46.452 The operation has completed successfully. 00:03:46.452 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:46.452 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.452 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 280347 00:03:46.452 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.452 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:46.452 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.452 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:46.452 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.712 20:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:49.998 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:49.999 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.999 20:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:49.999 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:49.999 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:49.999 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:49.999 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.999 20:17:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 
20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.366 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 
-- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.367 20:17:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.658 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.658 00:03:56.658 real 0m11.955s 00:03:56.658 user 0m3.453s 00:03:56.658 sys 0m6.425s 00:03:56.658 
20:17:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.658 20:17:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:56.658 ************************************ 00:03:56.658 END TEST nvme_mount 00:03:56.658 ************************************ 00:03:56.658 20:17:48 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:56.658 20:17:48 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:56.658 20:17:48 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.658 20:17:48 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.658 20:17:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:56.658 ************************************ 00:03:56.658 START TEST dm_mount 00:03:56.658 ************************************ 00:03:56.658 20:17:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:56.658 20:17:48 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:56.658 20:17:48 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:56.658 20:17:48 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:56.658 20:17:48 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:56.658 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:56.659 20:17:48 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:57.596 Creating new GPT entries in memory. 00:03:57.596 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.596 other utilities. 00:03:57.596 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.596 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.596 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:57.597 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.597 20:17:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:58.535 Creating new GPT entries in memory. 00:03:58.535 The operation has completed successfully. 00:03:58.535 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.535 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.535 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.535 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.535 20:17:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:59.474 The operation has completed successfully. 00:03:59.474 20:17:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 284754 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:59.734 20:17:51 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.734 20:17:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:03.020 20:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:03.020 
20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.020 20:17:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:06.326 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.326 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.327 20:17:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:06.327 00:04:06.327 real 0m9.481s 00:04:06.327 user 0m2.315s 00:04:06.327 sys 0m4.223s 00:04:06.327 20:17:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.327 20:17:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:06.327 ************************************ 00:04:06.327 END TEST dm_mount 00:04:06.327 ************************************ 00:04:06.327 20:17:58 setup.sh.devices -- common/autotest_common.sh@1142 
-- # return 0 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.327 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:06.327 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:06.327 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:06.327 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.327 20:17:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:06.327 00:04:06.327 real 0m25.567s 00:04:06.327 user 0m7.175s 00:04:06.327 sys 0m13.285s 00:04:06.327 20:17:58 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.327 20:17:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:06.327 ************************************ 00:04:06.327 END TEST devices 00:04:06.327 ************************************ 00:04:06.327 20:17:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:06.327 00:04:06.327 real 1m28.426s 00:04:06.327 user 0m26.883s 00:04:06.327 sys 0m50.028s 00:04:06.327 20:17:58 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.327 20:17:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.327 ************************************ 00:04:06.327 END TEST setup.sh 00:04:06.327 ************************************ 00:04:06.327 20:17:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:06.327 20:17:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:09.609 Hugepages 00:04:09.609 node hugesize free / total 00:04:09.609 node0 1048576kB 0 / 0 00:04:09.609 node0 2048kB 2048 / 2048 00:04:09.609 node1 1048576kB 0 / 0 00:04:09.609 node1 2048kB 0 / 0 00:04:09.609 00:04:09.609 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.609 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:09.609 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:09.609 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:09.609 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:09.609 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:09.609 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:09.609 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:09.609 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 
00:04:09.609 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:09.609 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:09.609 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:09.609 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:09.609 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:09.609 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:09.609 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:09.609 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:09.609 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:09.609 20:18:01 -- spdk/autotest.sh@130 -- # uname -s 00:04:09.609 20:18:01 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:09.609 20:18:01 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:09.609 20:18:01 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:12.901 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.901 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.160 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.160 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.160 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.160 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.160 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.538 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.797 20:18:07 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:15.736 20:18:08 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:15.736 20:18:08 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:15.736 20:18:08 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:15.736 20:18:08 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:15.736 20:18:08 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:15.736 20:18:08 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:15.736 20:18:08 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.736 20:18:08 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:15.736 20:18:08 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:15.994 20:18:08 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:15.994 20:18:08 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:15.994 20:18:08 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.303 Waiting for block devices as requested 00:04:19.303 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.303 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.303 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.303 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:19.303 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:19.562 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.562 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.562 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:19.820 
0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.820 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.820 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.820 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:20.079 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:20.079 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:20.079 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:20.338 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:20.338 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:20.597 20:18:12 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:20.597 20:18:12 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:20.597 20:18:12 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:04:20.597 20:18:12 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:20.597 20:18:12 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:20.597 20:18:12 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:20.597 20:18:12 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:20.597 20:18:12 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:20.597 20:18:12 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:20.597 20:18:12 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:20.597 20:18:12 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:20.597 20:18:12 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:20.597 20:18:12 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:20.597 20:18:12 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:20.597 20:18:12 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:20.597 20:18:12 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:20.597 20:18:12 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:20.597 20:18:12 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:20.597 20:18:12 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:20.597 20:18:12 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:20.597 20:18:12 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:20.597 20:18:12 -- common/autotest_common.sh@1557 -- # continue 00:04:20.597 20:18:12 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:20.597 20:18:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:20.597 20:18:12 -- common/autotest_common.sh@10 -- # set +x 00:04:20.597 20:18:12 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:20.597 20:18:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.597 20:18:12 -- common/autotest_common.sh@10 -- # set +x 00:04:20.597 20:18:12 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:23.880 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:23.880 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:23.880 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:23.880 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:23.880 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:23.880 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:23.880 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.138 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.138 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.138 0000:80:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:04:24.138 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.138 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.138 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.138 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.138 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.138 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:25.512 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:25.770 20:18:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:25.770 20:18:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.770 20:18:17 -- common/autotest_common.sh@10 -- # set +x 00:04:25.770 20:18:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:25.770 20:18:17 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:25.770 20:18:17 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.770 20:18:17 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:25.770 20:18:17 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:25.770 20:18:17 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:25.770 20:18:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:25.770 20:18:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:25.770 20:18:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.770 20:18:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.770 20:18:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:25.770 20:18:18 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:25.770 20:18:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:25.770 20:18:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:25.770 20:18:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:25.770 20:18:18 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:25.770 20:18:18 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:25.770 20:18:18 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:25.770 20:18:18 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:04:25.770 20:18:18 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:04:25.770 20:18:18 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=294844 00:04:25.770 20:18:18 -- common/autotest_common.sh@1598 -- # waitforlisten 294844 00:04:25.770 20:18:18 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.770 20:18:18 -- common/autotest_common.sh@829 -- # '[' -z 294844 ']' 00:04:25.770 20:18:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.770 20:18:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.770 20:18:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.770 20:18:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.770 20:18:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.770 [2024-07-15 20:18:18.121747] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:04:25.770 [2024-07-15 20:18:18.121813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294844 ] 00:04:26.028 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.028 [2024-07-15 20:18:18.190268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.028 [2024-07-15 20:18:18.261762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.684 20:18:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.684 20:18:18 -- common/autotest_common.sh@862 -- # return 0 00:04:26.684 20:18:18 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:26.684 20:18:18 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:26.684 20:18:18 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:29.973 nvme0n1 00:04:29.973 20:18:21 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:29.973 [2024-07-15 20:18:22.091253] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:29.973 request: 00:04:29.973 { 00:04:29.973 "nvme_ctrlr_name": "nvme0", 00:04:29.973 "password": "test", 00:04:29.973 "method": "bdev_nvme_opal_revert", 00:04:29.973 "req_id": 1 00:04:29.973 } 00:04:29.973 Got JSON-RPC error response 00:04:29.973 response: 00:04:29.973 { 00:04:29.973 "code": -32602, 00:04:29.973 "message": "Invalid parameters" 00:04:29.973 } 00:04:29.973 20:18:22 -- common/autotest_common.sh@1604 -- # true 00:04:29.973 20:18:22 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:29.973 20:18:22 -- common/autotest_common.sh@1608 -- # killprocess 294844 00:04:29.973 20:18:22 -- common/autotest_common.sh@948 -- # '[' -z 294844 ']' 00:04:29.973 20:18:22 -- common/autotest_common.sh@952 -- # kill -0 294844 00:04:29.973 20:18:22 -- common/autotest_common.sh@953 -- # uname 00:04:29.973 20:18:22 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.973 20:18:22 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 294844 00:04:29.973 20:18:22 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.973 20:18:22 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.973 20:18:22 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 294844' 00:04:29.973 killing process with pid 294844 00:04:29.973 20:18:22 -- common/autotest_common.sh@967 -- # kill 294844 00:04:29.973 20:18:22 -- common/autotest_common.sh@972 -- # wait 294844 00:04:32.508 20:18:24 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:32.508 20:18:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:32.508 20:18:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.508 20:18:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.508 20:18:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:32.508 20:18:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.508 20:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:32.508 20:18:24 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:32.508 20:18:24 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:32.508 20:18:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:04:32.508 20:18:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.508 20:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:32.508 ************************************ 00:04:32.508 START TEST env 00:04:32.508 ************************************ 00:04:32.508 20:18:24 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:32.508 * Looking for test storage... 00:04:32.508 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:32.508 20:18:24 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.508 20:18:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.508 20:18:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.508 20:18:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.508 ************************************ 00:04:32.508 START TEST env_memory 00:04:32.508 ************************************ 00:04:32.508 20:18:24 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.508 00:04:32.508 00:04:32.508 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.508 http://cunit.sourceforge.net/ 00:04:32.508 00:04:32.508 00:04:32.508 Suite: memory 00:04:32.508 Test: alloc and free memory map ...[2024-07-15 20:18:24.558975] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.508 passed 00:04:32.508 Test: mem map translation ...[2024-07-15 20:18:24.572087] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.508 [2024-07-15 20:18:24.572102] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.508 [2024-07-15 20:18:24.572132] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.508 [2024-07-15 20:18:24.572141] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.508 passed 00:04:32.508 Test: mem map registration ...[2024-07-15 20:18:24.593493] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:32.508 [2024-07-15 20:18:24.593508] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:32.508 passed 00:04:32.509 Test: mem map adjacent registrations ...passed 00:04:32.509 00:04:32.509 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.509 suites 1 1 n/a 0 0 00:04:32.509 tests 4 4 4 0 0 00:04:32.509 asserts 152 152 152 0 n/a 00:04:32.509 00:04:32.509 Elapsed time = 0.088 seconds 00:04:32.509 00:04:32.509 real 0m0.101s 00:04:32.509 user 0m0.091s 00:04:32.509 sys 0m0.010s 00:04:32.509 20:18:24 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.509 20:18:24 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.509 ************************************ 00:04:32.509 END TEST env_memory 00:04:32.509 ************************************ 00:04:32.509 20:18:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.509 20:18:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.509 20:18:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.509 20:18:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.509 20:18:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.509 ************************************ 00:04:32.509 START TEST env_vtophys 00:04:32.509 ************************************ 00:04:32.509 20:18:24 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.509 EAL: lib.eal log level changed from notice to debug 00:04:32.509 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.509 EAL: Detected lcore 1 as core 1 on socket 0 00:04:32.509 EAL: Detected lcore 2 as core 2 on socket 0 00:04:32.509 EAL: Detected lcore 3 as core 3 on socket 0 00:04:32.509 EAL: Detected lcore 4 as core 4 on socket 0 00:04:32.509 EAL: Detected lcore 5 as core 5 on socket 0 00:04:32.509 EAL: Detected lcore 6 as core 6 on socket 0 00:04:32.509 EAL: Detected lcore 7 as core 8 on socket 0 00:04:32.509 EAL: Detected lcore 8 as core 9 on socket 0 00:04:32.509 EAL: Detected lcore 9 as core 10 on socket 0 00:04:32.509 EAL: Detected lcore 10 as core 11 on socket 0 00:04:32.509 EAL: Detected lcore 11 as core 12 on socket 0 00:04:32.509 EAL: Detected lcore 12 as core 13 on socket 0 00:04:32.509 EAL: Detected lcore 13 as core 14 on socket 0 00:04:32.509 EAL: Detected lcore 14 as core 16 on socket 0 00:04:32.509 EAL: Detected lcore 15 as core 17 on socket 0 00:04:32.509 EAL: Detected lcore 16 as core 18 on socket 0 00:04:32.509 EAL: Detected lcore 17 as core 19 on socket 0 00:04:32.509 EAL: Detected lcore 18 as core 20 on socket 0 00:04:32.509 EAL: Detected lcore 19 as core 21 on socket 0 00:04:32.509 EAL: Detected lcore 20 as core 22 on socket 0 00:04:32.509 EAL: Detected lcore 21 as core 24 on socket 0 00:04:32.509 EAL: Detected lcore 22 as core 25 on socket 0 00:04:32.509 EAL: Detected lcore 23 as core 26 on socket 0 00:04:32.509 EAL: Detected lcore 24 as core 27 on socket 0 00:04:32.509 EAL: Detected lcore 25 as core 28 on socket 0 00:04:32.509 EAL: Detected lcore 26 as core 29 on socket 0 00:04:32.509 EAL: Detected lcore 27 as core 30 on socket 0 00:04:32.509 EAL: Detected lcore 28 as core 0 on socket 1 00:04:32.509 EAL: Detected lcore 29 as core 1 on socket 1 00:04:32.509 EAL: Detected lcore 30 as core 2 on socket 1 00:04:32.509 EAL: Detected lcore 31 as core 3 on socket 1 00:04:32.509 EAL: Detected lcore 32 as core 4 on socket 1 00:04:32.509 EAL: Detected lcore 33 as core 5 on socket 1 00:04:32.509 EAL: Detected lcore 34 as core 6 on socket 1 00:04:32.509 EAL: Detected lcore 35 as core 8 on socket 1 00:04:32.509 EAL: Detected lcore 36 as core 9 on socket 1 00:04:32.509 EAL: Detected lcore 37 as core 10 on socket 1 00:04:32.509 EAL: Detected lcore 38 as core 11 on socket 1 00:04:32.509 EAL: Detected lcore 39 as core 12 on socket 1 00:04:32.509 EAL: Detected lcore 40 as core 13 on socket 1 00:04:32.509 EAL: Detected lcore 41 as core 14 on socket 1 00:04:32.509 EAL: Detected lcore 42 as core 16 on socket 1 00:04:32.509 EAL: Detected lcore 43 as core 17 on socket 1 
00:04:32.509 EAL: Detected lcore 44 as core 18 on socket 1 00:04:32.509 EAL: Detected lcore 45 as core 19 on socket 1 00:04:32.509 EAL: Detected lcore 46 as core 20 on socket 1 00:04:32.509 EAL: Detected lcore 47 as core 21 on socket 1 00:04:32.509 EAL: Detected lcore 48 as core 22 on socket 1 00:04:32.509 EAL: Detected lcore 49 as core 24 on socket 1 00:04:32.509 EAL: Detected lcore 50 as core 25 on socket 1 00:04:32.509 EAL: Detected lcore 51 as core 26 on socket 1 00:04:32.509 EAL: Detected lcore 52 as core 27 on socket 1 00:04:32.509 EAL: Detected lcore 53 as core 28 on socket 1 00:04:32.509 EAL: Detected lcore 54 as core 29 on socket 1 00:04:32.509 EAL: Detected lcore 55 as core 30 on socket 1 00:04:32.509 EAL: Detected lcore 56 as core 0 on socket 0 00:04:32.509 EAL: Detected lcore 57 as core 1 on socket 0 00:04:32.509 EAL: Detected lcore 58 as core 2 on socket 0 00:04:32.509 EAL: Detected lcore 59 as core 3 on socket 0 00:04:32.509 EAL: Detected lcore 60 as core 4 on socket 0 00:04:32.509 EAL: Detected lcore 61 as core 5 on socket 0 00:04:32.509 EAL: Detected lcore 62 as core 6 on socket 0 00:04:32.509 EAL: Detected lcore 63 as core 8 on socket 0 00:04:32.509 EAL: Detected lcore 64 as core 9 on socket 0 00:04:32.509 EAL: Detected lcore 65 as core 10 on socket 0 00:04:32.509 EAL: Detected lcore 66 as core 11 on socket 0 00:04:32.509 EAL: Detected lcore 67 as core 12 on socket 0 00:04:32.509 EAL: Detected lcore 68 as core 13 on socket 0 00:04:32.509 EAL: Detected lcore 69 as core 14 on socket 0 00:04:32.509 EAL: Detected lcore 70 as core 16 on socket 0 00:04:32.509 EAL: Detected lcore 71 as core 17 on socket 0 00:04:32.509 EAL: Detected lcore 72 as core 18 on socket 0 00:04:32.509 EAL: Detected lcore 73 as core 19 on socket 0 00:04:32.509 EAL: Detected lcore 74 as core 20 on socket 0 00:04:32.509 EAL: Detected lcore 75 as core 21 on socket 0 00:04:32.509 EAL: Detected lcore 76 as core 22 on socket 0 00:04:32.509 EAL: Detected lcore 77 as core 24 on socket 0 00:04:32.509 EAL: Detected lcore 78 as core 25 on socket 0 00:04:32.509 EAL: Detected lcore 79 as core 26 on socket 0 00:04:32.509 EAL: Detected lcore 80 as core 27 on socket 0 00:04:32.509 EAL: Detected lcore 81 as core 28 on socket 0 00:04:32.509 EAL: Detected lcore 82 as core 29 on socket 0 00:04:32.509 EAL: Detected lcore 83 as core 30 on socket 0 00:04:32.509 EAL: Detected lcore 84 as core 0 on socket 1 00:04:32.509 EAL: Detected lcore 85 as core 1 on socket 1 00:04:32.509 EAL: Detected lcore 86 as core 2 on socket 1 00:04:32.509 EAL: Detected lcore 87 as core 3 on socket 1 00:04:32.509 EAL: Detected lcore 88 as core 4 on socket 1 00:04:32.509 EAL: Detected lcore 89 as core 5 on socket 1 00:04:32.509 EAL: Detected lcore 90 as core 6 on socket 1 00:04:32.509 EAL: Detected lcore 91 as core 8 on socket 1 00:04:32.509 EAL: Detected lcore 92 as core 9 on socket 1 00:04:32.509 EAL: Detected lcore 93 as core 10 on socket 1 00:04:32.509 EAL: Detected lcore 94 as core 11 on socket 1 00:04:32.509 EAL: Detected lcore 95 as core 12 on socket 1 00:04:32.509 EAL: Detected lcore 96 as core 13 on socket 1 00:04:32.509 EAL: Detected lcore 97 as core 14 on socket 1 00:04:32.509 EAL: Detected lcore 98 as core 16 on socket 1 00:04:32.509 EAL: Detected lcore 99 as core 17 on socket 1 00:04:32.509 EAL: Detected lcore 100 as core 18 on socket 1 00:04:32.509 EAL: Detected lcore 101 as core 19 on socket 1 00:04:32.509 EAL: Detected lcore 102 as core 20 on socket 1 00:04:32.509 EAL: Detected lcore 103 as core 21 on socket 1 00:04:32.509 EAL: Detected 
lcore 104 as core 22 on socket 1 00:04:32.509 EAL: Detected lcore 105 as core 24 on socket 1 00:04:32.509 EAL: Detected lcore 106 as core 25 on socket 1 00:04:32.509 EAL: Detected lcore 107 as core 26 on socket 1 00:04:32.509 EAL: Detected lcore 108 as core 27 on socket 1 00:04:32.509 EAL: Detected lcore 109 as core 28 on socket 1 00:04:32.509 EAL: Detected lcore 110 as core 29 on socket 1 00:04:32.509 EAL: Detected lcore 111 as core 30 on socket 1 00:04:32.509 EAL: Maximum logical cores by configuration: 128 00:04:32.509 EAL: Detected CPU lcores: 112 00:04:32.509 EAL: Detected NUMA nodes: 2 00:04:32.509 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:32.509 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:32.509 EAL: Checking presence of .so 'librte_eal.so' 00:04:32.509 EAL: Detected static linkage of DPDK 00:04:32.509 EAL: No shared files mode enabled, IPC will be disabled 00:04:32.509 EAL: Bus pci wants IOVA as 'DC' 00:04:32.509 EAL: Buses did not request a specific IOVA mode. 00:04:32.509 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:32.509 EAL: Selected IOVA mode 'VA' 00:04:32.509 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.509 EAL: Probing VFIO support... 00:04:32.509 EAL: IOMMU type 1 (Type 1) is supported 00:04:32.509 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:32.509 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:32.509 EAL: VFIO support initialized 00:04:32.509 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.509 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.509 EAL: Setting up physically contiguous memory... 00:04:32.509 EAL: Setting maximum number of open files to 524288 00:04:32.509 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.509 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:32.509 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.509 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.509 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.509 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.509 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.509 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.509 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.509 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.509 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.509 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.509 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.509 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.509 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.509 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.509 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.509 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.509 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.509 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.509 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.509 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.509 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.509 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:04:32.509 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:32.509 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.509 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:32.509 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.509 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.510 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:32.510 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:32.510 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.510 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:32.510 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.510 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.510 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:32.510 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:32.510 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.510 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:32.510 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.510 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.510 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:32.510 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:32.510 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.510 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:32.510 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.510 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.510 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:32.510 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:32.510 EAL: Hugepages will be freed exactly as allocated. 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: TSC frequency is ~2500000 KHz 00:04:32.510 EAL: Main lcore 0 is ready (tid=7f570e20ca00;cpuset=[0]) 00:04:32.510 EAL: Trying to obtain current memory policy. 00:04:32.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.510 EAL: Restoring previous memory policy: 0 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was expanded by 2MB 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Mem event callback 'spdk:(nil)' registered 00:04:32.510 00:04:32.510 00:04:32.510 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.510 http://cunit.sourceforge.net/ 00:04:32.510 00:04:32.510 00:04:32.510 Suite: components_suite 00:04:32.510 Test: vtophys_malloc_test ...passed 00:04:32.510 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:32.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.510 EAL: Restoring previous memory policy: 4 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was expanded by 4MB 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was shrunk by 4MB 00:04:32.510 EAL: Trying to obtain current memory policy. 
00:04:32.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.510 EAL: Restoring previous memory policy: 4 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was expanded by 6MB 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was shrunk by 6MB 00:04:32.510 EAL: Trying to obtain current memory policy. 00:04:32.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.510 EAL: Restoring previous memory policy: 4 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was expanded by 10MB 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was shrunk by 10MB 00:04:32.510 EAL: Trying to obtain current memory policy. 00:04:32.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.510 EAL: Restoring previous memory policy: 4 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was expanded by 18MB 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was shrunk by 18MB 00:04:32.510 EAL: Trying to obtain current memory policy. 00:04:32.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.510 EAL: Restoring previous memory policy: 4 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was expanded by 34MB 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was shrunk by 34MB 00:04:32.510 EAL: Trying to obtain current memory policy. 00:04:32.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.510 EAL: Restoring previous memory policy: 4 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was expanded by 66MB 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was shrunk by 66MB 00:04:32.510 EAL: Trying to obtain current memory policy. 
00:04:32.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.510 EAL: Restoring previous memory policy: 4 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.510 EAL: request: mp_malloc_sync 00:04:32.510 EAL: No shared files mode enabled, IPC is disabled 00:04:32.510 EAL: Heap on socket 0 was expanded by 130MB 00:04:32.510 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.769 EAL: request: mp_malloc_sync 00:04:32.769 EAL: No shared files mode enabled, IPC is disabled 00:04:32.769 EAL: Heap on socket 0 was shrunk by 130MB 00:04:32.769 EAL: Trying to obtain current memory policy. 00:04:32.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.770 EAL: Restoring previous memory policy: 4 00:04:32.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.770 EAL: request: mp_malloc_sync 00:04:32.770 EAL: No shared files mode enabled, IPC is disabled 00:04:32.770 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.770 EAL: request: mp_malloc_sync 00:04:32.770 EAL: No shared files mode enabled, IPC is disabled 00:04:32.770 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.770 EAL: Trying to obtain current memory policy. 00:04:32.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.770 EAL: Restoring previous memory policy: 4 00:04:32.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.770 EAL: request: mp_malloc_sync 00:04:32.770 EAL: No shared files mode enabled, IPC is disabled 00:04:32.770 EAL: Heap on socket 0 was expanded by 514MB 00:04:33.029 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.029 EAL: request: mp_malloc_sync 00:04:33.029 EAL: No shared files mode enabled, IPC is disabled 00:04:33.029 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.029 EAL: Trying to obtain current memory policy. 
00:04:33.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.288 EAL: Restoring previous memory policy: 4 00:04:33.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.288 EAL: request: mp_malloc_sync 00:04:33.288 EAL: No shared files mode enabled, IPC is disabled 00:04:33.288 EAL: Heap on socket 0 was expanded by 1026MB 00:04:33.288 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.548 EAL: request: mp_malloc_sync 00:04:33.548 EAL: No shared files mode enabled, IPC is disabled 00:04:33.548 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.548 passed 00:04:33.548 00:04:33.548 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.548 suites 1 1 n/a 0 0 00:04:33.548 tests 2 2 2 0 0 00:04:33.548 asserts 497 497 497 0 n/a 00:04:33.548 00:04:33.548 Elapsed time = 0.963 seconds 00:04:33.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.548 EAL: request: mp_malloc_sync 00:04:33.548 EAL: No shared files mode enabled, IPC is disabled 00:04:33.548 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.548 EAL: No shared files mode enabled, IPC is disabled 00:04:33.548 EAL: No shared files mode enabled, IPC is disabled 00:04:33.548 EAL: No shared files mode enabled, IPC is disabled 00:04:33.548 00:04:33.548 real 0m1.084s 00:04:33.548 user 0m0.632s 00:04:33.548 sys 0m0.427s 00:04:33.548 20:18:25 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.548 20:18:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.548 ************************************ 00:04:33.548 END TEST env_vtophys 00:04:33.548 ************************************ 00:04:33.548 20:18:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.548 20:18:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.548 20:18:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.548 20:18:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.548 20:18:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.548 ************************************ 00:04:33.548 START TEST env_pci 00:04:33.548 ************************************ 00:04:33.548 20:18:25 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.548 00:04:33.548 00:04:33.548 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.548 http://cunit.sourceforge.net/ 00:04:33.548 00:04:33.548 00:04:33.548 Suite: pci 00:04:33.548 Test: pci_hook ...[2024-07-15 20:18:25.884885] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 296301 has claimed it 00:04:33.548 EAL: Cannot find device (10000:00:01.0) 00:04:33.548 EAL: Failed to attach device on primary process 00:04:33.548 passed 00:04:33.548 00:04:33.548 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.548 suites 1 1 n/a 0 0 00:04:33.548 tests 1 1 1 0 0 00:04:33.548 asserts 25 25 25 0 n/a 00:04:33.548 00:04:33.548 Elapsed time = 0.034 seconds 00:04:33.548 00:04:33.548 real 0m0.052s 00:04:33.548 user 0m0.011s 00:04:33.548 sys 0m0.041s 00:04:33.548 20:18:25 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.548 20:18:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.548 ************************************ 00:04:33.548 END TEST env_pci 00:04:33.548 ************************************ 
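(For reference, the first three env unit tests above — env_memory, env_vtophys and env_pci — are driven through the run_test helper from autotest_common.sh and can also be run by hand on a built SPDK tree. The lines below are only a sketch based on the binary paths printed in this log: the SPDK_DIR value is an assumption taken from this workspace and must point at your own checkout, and the vtophys and pci cases generally expect root privileges and hugepage setup similar to what this run uses.)
# Sketch: invoking the env test binaries outside the Jenkins wrapper (paths as printed above)
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # assumption: replace with your own SPDK checkout
sudo "$SPDK_DIR/test/env/memory/memory_ut"    # mem map alloc/free, translation and registration cases
sudo "$SPDK_DIR/test/env/vtophys/vtophys"     # vtophys_malloc_test / vtophys_spdk_malloc_test; needs hugepages
sudo "$SPDK_DIR/test/env/pci/pci_ut"          # pci_hook case shown above
# or drive the whole group the same way autotest does:
# sudo "$SPDK_DIR/test/env/env.sh"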
00:04:33.808 20:18:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.808 20:18:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.808 20:18:25 env -- env/env.sh@15 -- # uname 00:04:33.808 20:18:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.808 20:18:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.808 20:18:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.808 20:18:25 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:33.808 20:18:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.808 20:18:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.808 ************************************ 00:04:33.808 START TEST env_dpdk_post_init 00:04:33.808 ************************************ 00:04:33.808 20:18:26 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.808 EAL: Detected CPU lcores: 112 00:04:33.808 EAL: Detected NUMA nodes: 2 00:04:33.808 EAL: Detected static linkage of DPDK 00:04:33.808 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.808 EAL: Selected IOVA mode 'VA' 00:04:33.808 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.808 EAL: VFIO support initialized 00:04:33.808 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.808 EAL: Using IOMMU type 1 (Type 1) 00:04:34.746 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:38.038 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:38.038 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:38.607 Starting DPDK initialization... 00:04:38.607 Starting SPDK post initialization... 00:04:38.607 SPDK NVMe probe 00:04:38.607 Attaching to 0000:d8:00.0 00:04:38.607 Attached to 0000:d8:00.0 00:04:38.607 Cleaning up... 
00:04:38.607 00:04:38.607 real 0m4.748s 00:04:38.607 user 0m3.547s 00:04:38.607 sys 0m0.445s 00:04:38.607 20:18:30 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.607 20:18:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.607 ************************************ 00:04:38.607 END TEST env_dpdk_post_init 00:04:38.607 ************************************ 00:04:38.607 20:18:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.607 20:18:30 env -- env/env.sh@26 -- # uname 00:04:38.607 20:18:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.607 20:18:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.607 20:18:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.607 20:18:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.607 20:18:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.607 ************************************ 00:04:38.607 START TEST env_mem_callbacks 00:04:38.607 ************************************ 00:04:38.607 20:18:30 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.607 EAL: Detected CPU lcores: 112 00:04:38.607 EAL: Detected NUMA nodes: 2 00:04:38.607 EAL: Detected static linkage of DPDK 00:04:38.607 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.607 EAL: Selected IOVA mode 'VA' 00:04:38.607 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.607 EAL: VFIO support initialized 00:04:38.607 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.607 00:04:38.607 00:04:38.607 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.607 http://cunit.sourceforge.net/ 00:04:38.607 00:04:38.607 00:04:38.607 Suite: memory 00:04:38.607 Test: test ... 
00:04:38.607 register 0x200000200000 2097152 00:04:38.607 malloc 3145728 00:04:38.607 register 0x200000400000 4194304 00:04:38.607 buf 0x200000500000 len 3145728 PASSED 00:04:38.607 malloc 64 00:04:38.607 buf 0x2000004fff40 len 64 PASSED 00:04:38.607 malloc 4194304 00:04:38.607 register 0x200000800000 6291456 00:04:38.607 buf 0x200000a00000 len 4194304 PASSED 00:04:38.607 free 0x200000500000 3145728 00:04:38.607 free 0x2000004fff40 64 00:04:38.607 unregister 0x200000400000 4194304 PASSED 00:04:38.607 free 0x200000a00000 4194304 00:04:38.607 unregister 0x200000800000 6291456 PASSED 00:04:38.607 malloc 8388608 00:04:38.607 register 0x200000400000 10485760 00:04:38.607 buf 0x200000600000 len 8388608 PASSED 00:04:38.607 free 0x200000600000 8388608 00:04:38.607 unregister 0x200000400000 10485760 PASSED 00:04:38.607 passed 00:04:38.607 00:04:38.607 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.607 suites 1 1 n/a 0 0 00:04:38.607 tests 1 1 1 0 0 00:04:38.607 asserts 15 15 15 0 n/a 00:04:38.607 00:04:38.607 Elapsed time = 0.006 seconds 00:04:38.607 00:04:38.607 real 0m0.068s 00:04:38.607 user 0m0.022s 00:04:38.607 sys 0m0.046s 00:04:38.607 20:18:30 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.607 20:18:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.607 ************************************ 00:04:38.607 END TEST env_mem_callbacks 00:04:38.607 ************************************ 00:04:38.607 20:18:30 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.607 00:04:38.607 real 0m6.575s 00:04:38.607 user 0m4.480s 00:04:38.607 sys 0m1.355s 00:04:38.607 20:18:30 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.607 20:18:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.607 ************************************ 00:04:38.607 END TEST env 00:04:38.607 ************************************ 00:04:38.867 20:18:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.867 20:18:30 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.867 20:18:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.867 20:18:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.867 20:18:30 -- common/autotest_common.sh@10 -- # set +x 00:04:38.867 ************************************ 00:04:38.867 START TEST rpc 00:04:38.867 ************************************ 00:04:38.867 20:18:31 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.867 * Looking for test storage... 00:04:38.867 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:38.867 20:18:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=297316 00:04:38.867 20:18:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.867 20:18:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:38.867 20:18:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 297316 00:04:38.867 20:18:31 rpc -- common/autotest_common.sh@829 -- # '[' -z 297316 ']' 00:04:38.867 20:18:31 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.867 20:18:31 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.867 20:18:31 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:38.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.867 20:18:31 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.867 20:18:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.867 [2024-07-15 20:18:31.173254] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:38.867 [2024-07-15 20:18:31.173345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297316 ] 00:04:38.867 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.867 [2024-07-15 20:18:31.244180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.127 [2024-07-15 20:18:31.322779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:39.127 [2024-07-15 20:18:31.322816] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 297316' to capture a snapshot of events at runtime. 00:04:39.127 [2024-07-15 20:18:31.322825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:39.127 [2024-07-15 20:18:31.322834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:39.127 [2024-07-15 20:18:31.322857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid297316 for offline analysis/debug. 00:04:39.127 [2024-07-15 20:18:31.322882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.695 20:18:31 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.695 20:18:31 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:39.695 20:18:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:39.695 20:18:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:39.695 20:18:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.695 20:18:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.695 20:18:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.695 20:18:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.695 20:18:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.695 ************************************ 00:04:39.695 START TEST rpc_integrity 00:04:39.695 ************************************ 00:04:39.695 20:18:31 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:39.695 20:18:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.695 20:18:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.695 20:18:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.695 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.695 20:18:32 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.695 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.695 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.695 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.695 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.695 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.695 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.695 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.695 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.695 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.695 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.954 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.954 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.954 { 00:04:39.954 "name": "Malloc0", 00:04:39.954 "aliases": [ 00:04:39.954 "8db38d78-0394-4502-aaa9-7e689706d584" 00:04:39.954 ], 00:04:39.954 "product_name": "Malloc disk", 00:04:39.954 "block_size": 512, 00:04:39.954 "num_blocks": 16384, 00:04:39.954 "uuid": "8db38d78-0394-4502-aaa9-7e689706d584", 00:04:39.954 "assigned_rate_limits": { 00:04:39.954 "rw_ios_per_sec": 0, 00:04:39.954 "rw_mbytes_per_sec": 0, 00:04:39.954 "r_mbytes_per_sec": 0, 00:04:39.954 "w_mbytes_per_sec": 0 00:04:39.954 }, 00:04:39.954 "claimed": false, 00:04:39.954 "zoned": false, 00:04:39.954 "supported_io_types": { 00:04:39.954 "read": true, 00:04:39.954 "write": true, 00:04:39.954 "unmap": true, 00:04:39.954 "flush": true, 00:04:39.954 "reset": true, 00:04:39.954 "nvme_admin": false, 00:04:39.954 "nvme_io": false, 00:04:39.954 "nvme_io_md": false, 00:04:39.954 "write_zeroes": true, 00:04:39.954 "zcopy": true, 00:04:39.954 "get_zone_info": false, 00:04:39.954 "zone_management": false, 00:04:39.954 "zone_append": false, 00:04:39.954 "compare": false, 00:04:39.954 "compare_and_write": false, 00:04:39.954 "abort": true, 00:04:39.954 "seek_hole": false, 00:04:39.954 "seek_data": false, 00:04:39.954 "copy": true, 00:04:39.954 "nvme_iov_md": false 00:04:39.954 }, 00:04:39.954 "memory_domains": [ 00:04:39.954 { 00:04:39.954 "dma_device_id": "system", 00:04:39.954 "dma_device_type": 1 00:04:39.954 }, 00:04:39.954 { 00:04:39.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.954 "dma_device_type": 2 00:04:39.954 } 00:04:39.954 ], 00:04:39.954 "driver_specific": {} 00:04:39.954 } 00:04:39.954 ]' 00:04:39.954 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.954 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.954 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.954 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.954 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.954 [2024-07-15 20:18:32.128166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.954 [2024-07-15 20:18:32.128199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.954 [2024-07-15 20:18:32.128215] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x530f260 00:04:39.954 [2024-07-15 20:18:32.128224] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:04:39.954 [2024-07-15 20:18:32.129040] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.954 [2024-07-15 20:18:32.129063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.954 Passthru0 00:04:39.954 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.954 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.954 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.954 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.954 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.954 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.954 { 00:04:39.954 "name": "Malloc0", 00:04:39.954 "aliases": [ 00:04:39.954 "8db38d78-0394-4502-aaa9-7e689706d584" 00:04:39.954 ], 00:04:39.954 "product_name": "Malloc disk", 00:04:39.954 "block_size": 512, 00:04:39.954 "num_blocks": 16384, 00:04:39.954 "uuid": "8db38d78-0394-4502-aaa9-7e689706d584", 00:04:39.954 "assigned_rate_limits": { 00:04:39.954 "rw_ios_per_sec": 0, 00:04:39.954 "rw_mbytes_per_sec": 0, 00:04:39.954 "r_mbytes_per_sec": 0, 00:04:39.954 "w_mbytes_per_sec": 0 00:04:39.954 }, 00:04:39.954 "claimed": true, 00:04:39.954 "claim_type": "exclusive_write", 00:04:39.954 "zoned": false, 00:04:39.954 "supported_io_types": { 00:04:39.954 "read": true, 00:04:39.954 "write": true, 00:04:39.954 "unmap": true, 00:04:39.954 "flush": true, 00:04:39.954 "reset": true, 00:04:39.954 "nvme_admin": false, 00:04:39.954 "nvme_io": false, 00:04:39.954 "nvme_io_md": false, 00:04:39.954 "write_zeroes": true, 00:04:39.954 "zcopy": true, 00:04:39.954 "get_zone_info": false, 00:04:39.954 "zone_management": false, 00:04:39.954 "zone_append": false, 00:04:39.954 "compare": false, 00:04:39.954 "compare_and_write": false, 00:04:39.954 "abort": true, 00:04:39.954 "seek_hole": false, 00:04:39.954 "seek_data": false, 00:04:39.955 "copy": true, 00:04:39.955 "nvme_iov_md": false 00:04:39.955 }, 00:04:39.955 "memory_domains": [ 00:04:39.955 { 00:04:39.955 "dma_device_id": "system", 00:04:39.955 "dma_device_type": 1 00:04:39.955 }, 00:04:39.955 { 00:04:39.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.955 "dma_device_type": 2 00:04:39.955 } 00:04:39.955 ], 00:04:39.955 "driver_specific": {} 00:04:39.955 }, 00:04:39.955 { 00:04:39.955 "name": "Passthru0", 00:04:39.955 "aliases": [ 00:04:39.955 "84618690-b108-5ea5-9282-38390fbe4b55" 00:04:39.955 ], 00:04:39.955 "product_name": "passthru", 00:04:39.955 "block_size": 512, 00:04:39.955 "num_blocks": 16384, 00:04:39.955 "uuid": "84618690-b108-5ea5-9282-38390fbe4b55", 00:04:39.955 "assigned_rate_limits": { 00:04:39.955 "rw_ios_per_sec": 0, 00:04:39.955 "rw_mbytes_per_sec": 0, 00:04:39.955 "r_mbytes_per_sec": 0, 00:04:39.955 "w_mbytes_per_sec": 0 00:04:39.955 }, 00:04:39.955 "claimed": false, 00:04:39.955 "zoned": false, 00:04:39.955 "supported_io_types": { 00:04:39.955 "read": true, 00:04:39.955 "write": true, 00:04:39.955 "unmap": true, 00:04:39.955 "flush": true, 00:04:39.955 "reset": true, 00:04:39.955 "nvme_admin": false, 00:04:39.955 "nvme_io": false, 00:04:39.955 "nvme_io_md": false, 00:04:39.955 "write_zeroes": true, 00:04:39.955 "zcopy": true, 00:04:39.955 "get_zone_info": false, 00:04:39.955 "zone_management": false, 00:04:39.955 "zone_append": false, 00:04:39.955 "compare": false, 00:04:39.955 "compare_and_write": false, 00:04:39.955 "abort": true, 00:04:39.955 
"seek_hole": false, 00:04:39.955 "seek_data": false, 00:04:39.955 "copy": true, 00:04:39.955 "nvme_iov_md": false 00:04:39.955 }, 00:04:39.955 "memory_domains": [ 00:04:39.955 { 00:04:39.955 "dma_device_id": "system", 00:04:39.955 "dma_device_type": 1 00:04:39.955 }, 00:04:39.955 { 00:04:39.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.955 "dma_device_type": 2 00:04:39.955 } 00:04:39.955 ], 00:04:39.955 "driver_specific": { 00:04:39.955 "passthru": { 00:04:39.955 "name": "Passthru0", 00:04:39.955 "base_bdev_name": "Malloc0" 00:04:39.955 } 00:04:39.955 } 00:04:39.955 } 00:04:39.955 ]' 00:04:39.955 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.955 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.955 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.955 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.955 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.955 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.955 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.955 20:18:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.955 00:04:39.955 real 0m0.281s 00:04:39.955 user 0m0.180s 00:04:39.955 sys 0m0.047s 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.955 20:18:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.955 ************************************ 00:04:39.955 END TEST rpc_integrity 00:04:39.955 ************************************ 00:04:39.955 20:18:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.955 20:18:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:39.955 20:18:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.955 20:18:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.955 20:18:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.214 ************************************ 00:04:40.214 START TEST rpc_plugins 00:04:40.214 ************************************ 00:04:40.214 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:40.214 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.214 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.214 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.214 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.214 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.214 20:18:32 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.214 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.214 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.214 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.214 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.214 { 00:04:40.214 "name": "Malloc1", 00:04:40.214 "aliases": [ 00:04:40.214 "2ccee2db-bee8-4782-a398-6872495ae991" 00:04:40.214 ], 00:04:40.214 "product_name": "Malloc disk", 00:04:40.214 "block_size": 4096, 00:04:40.214 "num_blocks": 256, 00:04:40.214 "uuid": "2ccee2db-bee8-4782-a398-6872495ae991", 00:04:40.214 "assigned_rate_limits": { 00:04:40.214 "rw_ios_per_sec": 0, 00:04:40.214 "rw_mbytes_per_sec": 0, 00:04:40.214 "r_mbytes_per_sec": 0, 00:04:40.214 "w_mbytes_per_sec": 0 00:04:40.214 }, 00:04:40.214 "claimed": false, 00:04:40.214 "zoned": false, 00:04:40.214 "supported_io_types": { 00:04:40.214 "read": true, 00:04:40.214 "write": true, 00:04:40.214 "unmap": true, 00:04:40.214 "flush": true, 00:04:40.214 "reset": true, 00:04:40.214 "nvme_admin": false, 00:04:40.214 "nvme_io": false, 00:04:40.214 "nvme_io_md": false, 00:04:40.214 "write_zeroes": true, 00:04:40.214 "zcopy": true, 00:04:40.214 "get_zone_info": false, 00:04:40.214 "zone_management": false, 00:04:40.214 "zone_append": false, 00:04:40.214 "compare": false, 00:04:40.214 "compare_and_write": false, 00:04:40.214 "abort": true, 00:04:40.214 "seek_hole": false, 00:04:40.214 "seek_data": false, 00:04:40.214 "copy": true, 00:04:40.214 "nvme_iov_md": false 00:04:40.214 }, 00:04:40.214 "memory_domains": [ 00:04:40.214 { 00:04:40.214 "dma_device_id": "system", 00:04:40.214 "dma_device_type": 1 00:04:40.214 }, 00:04:40.214 { 00:04:40.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.214 "dma_device_type": 2 00:04:40.214 } 00:04:40.214 ], 00:04:40.214 "driver_specific": {} 00:04:40.214 } 00:04:40.214 ]' 00:04:40.214 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.215 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.215 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.215 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.215 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.215 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.215 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.215 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.215 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.215 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.215 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.215 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.215 20:18:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.215 00:04:40.215 real 0m0.121s 00:04:40.215 user 0m0.081s 00:04:40.215 sys 0m0.015s 00:04:40.215 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.215 20:18:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.215 ************************************ 00:04:40.215 END TEST rpc_plugins 00:04:40.215 ************************************ 00:04:40.215 20:18:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.215 20:18:32 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:40.215 20:18:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.215 20:18:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.215 20:18:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.215 ************************************ 00:04:40.215 START TEST rpc_trace_cmd_test 00:04:40.215 ************************************ 00:04:40.215 20:18:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:40.215 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:40.215 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.215 20:18:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.215 20:18:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.215 20:18:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.215 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:40.215 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid297316", 00:04:40.215 "tpoint_group_mask": "0x8", 00:04:40.215 "iscsi_conn": { 00:04:40.215 "mask": "0x2", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "scsi": { 00:04:40.215 "mask": "0x4", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "bdev": { 00:04:40.215 "mask": "0x8", 00:04:40.215 "tpoint_mask": "0xffffffffffffffff" 00:04:40.215 }, 00:04:40.215 "nvmf_rdma": { 00:04:40.215 "mask": "0x10", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "nvmf_tcp": { 00:04:40.215 "mask": "0x20", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "ftl": { 00:04:40.215 "mask": "0x40", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "blobfs": { 00:04:40.215 "mask": "0x80", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "dsa": { 00:04:40.215 "mask": "0x200", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "thread": { 00:04:40.215 "mask": "0x400", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "nvme_pcie": { 00:04:40.215 "mask": "0x800", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "iaa": { 00:04:40.215 "mask": "0x1000", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "nvme_tcp": { 00:04:40.215 "mask": "0x2000", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "bdev_nvme": { 00:04:40.215 "mask": "0x4000", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 }, 00:04:40.215 "sock": { 00:04:40.215 "mask": "0x8000", 00:04:40.215 "tpoint_mask": "0x0" 00:04:40.215 } 00:04:40.215 }' 00:04:40.215 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:04:40.474 00:04:40.474 real 0m0.206s 00:04:40.474 user 0m0.164s 00:04:40.474 sys 0m0.033s 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.474 20:18:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.474 ************************************ 00:04:40.474 END TEST rpc_trace_cmd_test 00:04:40.474 ************************************ 00:04:40.474 20:18:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.474 20:18:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:40.474 20:18:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:40.474 20:18:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:40.474 20:18:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.474 20:18:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.474 20:18:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.474 ************************************ 00:04:40.474 START TEST rpc_daemon_integrity 00:04:40.474 ************************************ 00:04:40.474 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:40.474 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.474 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.474 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.734 { 00:04:40.734 "name": "Malloc2", 00:04:40.734 "aliases": [ 00:04:40.734 "ff14c8a1-6254-4acc-8ce4-1abb2314c450" 00:04:40.734 ], 00:04:40.734 "product_name": "Malloc disk", 00:04:40.734 "block_size": 512, 00:04:40.734 "num_blocks": 16384, 00:04:40.734 "uuid": "ff14c8a1-6254-4acc-8ce4-1abb2314c450", 00:04:40.734 "assigned_rate_limits": { 00:04:40.734 "rw_ios_per_sec": 0, 00:04:40.734 "rw_mbytes_per_sec": 0, 00:04:40.734 "r_mbytes_per_sec": 0, 00:04:40.734 "w_mbytes_per_sec": 0 00:04:40.734 }, 00:04:40.734 "claimed": false, 00:04:40.734 "zoned": false, 00:04:40.734 "supported_io_types": { 00:04:40.734 "read": true, 00:04:40.734 "write": true, 00:04:40.734 "unmap": true, 00:04:40.734 "flush": true, 00:04:40.734 "reset": true, 00:04:40.734 "nvme_admin": false, 
00:04:40.734 "nvme_io": false, 00:04:40.734 "nvme_io_md": false, 00:04:40.734 "write_zeroes": true, 00:04:40.734 "zcopy": true, 00:04:40.734 "get_zone_info": false, 00:04:40.734 "zone_management": false, 00:04:40.734 "zone_append": false, 00:04:40.734 "compare": false, 00:04:40.734 "compare_and_write": false, 00:04:40.734 "abort": true, 00:04:40.734 "seek_hole": false, 00:04:40.734 "seek_data": false, 00:04:40.734 "copy": true, 00:04:40.734 "nvme_iov_md": false 00:04:40.734 }, 00:04:40.734 "memory_domains": [ 00:04:40.734 { 00:04:40.734 "dma_device_id": "system", 00:04:40.734 "dma_device_type": 1 00:04:40.734 }, 00:04:40.734 { 00:04:40.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.734 "dma_device_type": 2 00:04:40.734 } 00:04:40.734 ], 00:04:40.734 "driver_specific": {} 00:04:40.734 } 00:04:40.734 ]' 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.734 [2024-07-15 20:18:32.950273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:40.734 [2024-07-15 20:18:32.950302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.734 [2024-07-15 20:18:32.950317] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5300860 00:04:40.734 [2024-07-15 20:18:32.950326] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.734 [2024-07-15 20:18:32.951023] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.734 [2024-07-15 20:18:32.951043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.734 Passthru0 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.734 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.734 { 00:04:40.734 "name": "Malloc2", 00:04:40.734 "aliases": [ 00:04:40.734 "ff14c8a1-6254-4acc-8ce4-1abb2314c450" 00:04:40.734 ], 00:04:40.734 "product_name": "Malloc disk", 00:04:40.734 "block_size": 512, 00:04:40.734 "num_blocks": 16384, 00:04:40.734 "uuid": "ff14c8a1-6254-4acc-8ce4-1abb2314c450", 00:04:40.734 "assigned_rate_limits": { 00:04:40.734 "rw_ios_per_sec": 0, 00:04:40.734 "rw_mbytes_per_sec": 0, 00:04:40.734 "r_mbytes_per_sec": 0, 00:04:40.734 "w_mbytes_per_sec": 0 00:04:40.734 }, 00:04:40.734 "claimed": true, 00:04:40.734 "claim_type": "exclusive_write", 00:04:40.734 "zoned": false, 00:04:40.734 "supported_io_types": { 00:04:40.734 "read": true, 00:04:40.734 "write": true, 00:04:40.734 "unmap": true, 00:04:40.734 "flush": true, 00:04:40.734 "reset": true, 00:04:40.734 "nvme_admin": false, 00:04:40.734 "nvme_io": false, 00:04:40.734 "nvme_io_md": false, 00:04:40.734 "write_zeroes": true, 00:04:40.734 "zcopy": true, 
00:04:40.734 "get_zone_info": false, 00:04:40.734 "zone_management": false, 00:04:40.734 "zone_append": false, 00:04:40.734 "compare": false, 00:04:40.734 "compare_and_write": false, 00:04:40.734 "abort": true, 00:04:40.734 "seek_hole": false, 00:04:40.734 "seek_data": false, 00:04:40.734 "copy": true, 00:04:40.734 "nvme_iov_md": false 00:04:40.734 }, 00:04:40.734 "memory_domains": [ 00:04:40.734 { 00:04:40.734 "dma_device_id": "system", 00:04:40.734 "dma_device_type": 1 00:04:40.734 }, 00:04:40.734 { 00:04:40.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.734 "dma_device_type": 2 00:04:40.734 } 00:04:40.734 ], 00:04:40.734 "driver_specific": {} 00:04:40.734 }, 00:04:40.734 { 00:04:40.734 "name": "Passthru0", 00:04:40.734 "aliases": [ 00:04:40.734 "ca9cb3c4-0fff-5f52-9f79-c4bde9d8ed13" 00:04:40.734 ], 00:04:40.734 "product_name": "passthru", 00:04:40.734 "block_size": 512, 00:04:40.734 "num_blocks": 16384, 00:04:40.734 "uuid": "ca9cb3c4-0fff-5f52-9f79-c4bde9d8ed13", 00:04:40.734 "assigned_rate_limits": { 00:04:40.734 "rw_ios_per_sec": 0, 00:04:40.734 "rw_mbytes_per_sec": 0, 00:04:40.734 "r_mbytes_per_sec": 0, 00:04:40.734 "w_mbytes_per_sec": 0 00:04:40.734 }, 00:04:40.734 "claimed": false, 00:04:40.734 "zoned": false, 00:04:40.734 "supported_io_types": { 00:04:40.734 "read": true, 00:04:40.734 "write": true, 00:04:40.734 "unmap": true, 00:04:40.734 "flush": true, 00:04:40.734 "reset": true, 00:04:40.734 "nvme_admin": false, 00:04:40.734 "nvme_io": false, 00:04:40.734 "nvme_io_md": false, 00:04:40.734 "write_zeroes": true, 00:04:40.734 "zcopy": true, 00:04:40.734 "get_zone_info": false, 00:04:40.734 "zone_management": false, 00:04:40.734 "zone_append": false, 00:04:40.734 "compare": false, 00:04:40.734 "compare_and_write": false, 00:04:40.734 "abort": true, 00:04:40.735 "seek_hole": false, 00:04:40.735 "seek_data": false, 00:04:40.735 "copy": true, 00:04:40.735 "nvme_iov_md": false 00:04:40.735 }, 00:04:40.735 "memory_domains": [ 00:04:40.735 { 00:04:40.735 "dma_device_id": "system", 00:04:40.735 "dma_device_type": 1 00:04:40.735 }, 00:04:40.735 { 00:04:40.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.735 "dma_device_type": 2 00:04:40.735 } 00:04:40.735 ], 00:04:40.735 "driver_specific": { 00:04:40.735 "passthru": { 00:04:40.735 "name": "Passthru0", 00:04:40.735 "base_bdev_name": "Malloc2" 00:04:40.735 } 00:04:40.735 } 00:04:40.735 } 00:04:40.735 ]' 00:04:40.735 20:18:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.735 00:04:40.735 real 0m0.233s 00:04:40.735 user 0m0.152s 00:04:40.735 sys 0m0.032s 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.735 20:18:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.735 ************************************ 00:04:40.735 END TEST rpc_daemon_integrity 00:04:40.735 ************************************ 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.994 20:18:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.994 20:18:33 rpc -- rpc/rpc.sh@84 -- # killprocess 297316 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@948 -- # '[' -z 297316 ']' 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@952 -- # kill -0 297316 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@953 -- # uname 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 297316 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 297316' 00:04:40.994 killing process with pid 297316 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@967 -- # kill 297316 00:04:40.994 20:18:33 rpc -- common/autotest_common.sh@972 -- # wait 297316 00:04:41.252 00:04:41.252 real 0m2.439s 00:04:41.252 user 0m3.065s 00:04:41.252 sys 0m0.775s 00:04:41.252 20:18:33 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.252 20:18:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.252 ************************************ 00:04:41.252 END TEST rpc 00:04:41.252 ************************************ 00:04:41.252 20:18:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.252 20:18:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.252 20:18:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.252 20:18:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.252 20:18:33 -- common/autotest_common.sh@10 -- # set +x 00:04:41.252 ************************************ 00:04:41.252 START TEST skip_rpc 00:04:41.252 ************************************ 00:04:41.252 20:18:33 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.510 * Looking for test storage... 
00:04:41.510 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:41.510 20:18:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:41.510 20:18:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:41.510 20:18:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.510 20:18:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.510 20:18:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.510 20:18:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.510 ************************************ 00:04:41.510 START TEST skip_rpc 00:04:41.510 ************************************ 00:04:41.510 20:18:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:41.510 20:18:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=298010 00:04:41.510 20:18:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.510 20:18:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.510 20:18:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.510 [2024-07-15 20:18:33.719407] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:41.510 [2024-07-15 20:18:33.719476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298010 ] 00:04:41.510 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.510 [2024-07-15 20:18:33.788037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.510 [2024-07-15 20:18:33.860105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.771 20:18:38 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 298010 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 298010 ']' 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 298010 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 298010 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 298010' 00:04:46.771 killing process with pid 298010 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 298010 00:04:46.771 20:18:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 298010 00:04:46.771 00:04:46.771 real 0m5.364s 00:04:46.771 user 0m5.123s 00:04:46.771 sys 0m0.282s 00:04:46.771 20:18:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.771 20:18:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.771 ************************************ 00:04:46.771 END TEST skip_rpc 00:04:46.771 ************************************ 00:04:46.771 20:18:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.771 20:18:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.771 20:18:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.771 20:18:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.771 20:18:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.771 ************************************ 00:04:46.771 START TEST skip_rpc_with_json 00:04:46.771 ************************************ 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=298874 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 298874 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 298874 ']' 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.771 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.030 [2024-07-15 20:18:39.165845] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:47.030 [2024-07-15 20:18:39.165902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298874 ] 00:04:47.030 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.030 [2024-07-15 20:18:39.232784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.030 [2024-07-15 20:18:39.310643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.606 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.606 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:47.606 20:18:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.606 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.606 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.606 [2024-07-15 20:18:39.987055] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.865 request: 00:04:47.865 { 00:04:47.865 "trtype": "tcp", 00:04:47.865 "method": "nvmf_get_transports", 00:04:47.865 "req_id": 1 00:04:47.865 } 00:04:47.865 Got JSON-RPC error response 00:04:47.865 response: 00:04:47.865 { 00:04:47.865 "code": -19, 00:04:47.865 "message": "No such device" 00:04:47.865 } 00:04:47.865 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:47.865 20:18:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.865 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.865 20:18:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.865 [2024-07-15 20:18:39.999148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.865 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.865 20:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.865 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.865 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.865 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.865 20:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:47.865 { 00:04:47.865 "subsystems": [ 00:04:47.865 { 00:04:47.865 "subsystem": "scheduler", 00:04:47.865 "config": [ 00:04:47.865 { 00:04:47.865 "method": "framework_set_scheduler", 00:04:47.865 "params": { 00:04:47.865 "name": "static" 00:04:47.865 } 00:04:47.865 } 00:04:47.865 ] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "vmd", 00:04:47.865 "config": [] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "sock", 00:04:47.865 "config": [ 00:04:47.865 { 00:04:47.865 "method": "sock_set_default_impl", 00:04:47.865 
"params": { 00:04:47.865 "impl_name": "posix" 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "sock_impl_set_options", 00:04:47.865 "params": { 00:04:47.865 "impl_name": "ssl", 00:04:47.865 "recv_buf_size": 4096, 00:04:47.865 "send_buf_size": 4096, 00:04:47.865 "enable_recv_pipe": true, 00:04:47.865 "enable_quickack": false, 00:04:47.865 "enable_placement_id": 0, 00:04:47.865 "enable_zerocopy_send_server": true, 00:04:47.865 "enable_zerocopy_send_client": false, 00:04:47.865 "zerocopy_threshold": 0, 00:04:47.865 "tls_version": 0, 00:04:47.865 "enable_ktls": false 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "sock_impl_set_options", 00:04:47.865 "params": { 00:04:47.865 "impl_name": "posix", 00:04:47.865 "recv_buf_size": 2097152, 00:04:47.865 "send_buf_size": 2097152, 00:04:47.865 "enable_recv_pipe": true, 00:04:47.865 "enable_quickack": false, 00:04:47.865 "enable_placement_id": 0, 00:04:47.865 "enable_zerocopy_send_server": true, 00:04:47.865 "enable_zerocopy_send_client": false, 00:04:47.865 "zerocopy_threshold": 0, 00:04:47.865 "tls_version": 0, 00:04:47.865 "enable_ktls": false 00:04:47.865 } 00:04:47.865 } 00:04:47.865 ] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "iobuf", 00:04:47.865 "config": [ 00:04:47.865 { 00:04:47.865 "method": "iobuf_set_options", 00:04:47.865 "params": { 00:04:47.865 "small_pool_count": 8192, 00:04:47.865 "large_pool_count": 1024, 00:04:47.865 "small_bufsize": 8192, 00:04:47.865 "large_bufsize": 135168 00:04:47.865 } 00:04:47.865 } 00:04:47.865 ] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "keyring", 00:04:47.865 "config": [] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "vfio_user_target", 00:04:47.865 "config": null 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "accel", 00:04:47.865 "config": [ 00:04:47.865 { 00:04:47.865 "method": "accel_set_options", 00:04:47.865 "params": { 00:04:47.865 "small_cache_size": 128, 00:04:47.865 "large_cache_size": 16, 00:04:47.865 "task_count": 2048, 00:04:47.865 "sequence_count": 2048, 00:04:47.865 "buf_count": 2048 00:04:47.865 } 00:04:47.865 } 00:04:47.865 ] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "bdev", 00:04:47.865 "config": [ 00:04:47.865 { 00:04:47.865 "method": "bdev_set_options", 00:04:47.865 "params": { 00:04:47.865 "bdev_io_pool_size": 65535, 00:04:47.865 "bdev_io_cache_size": 256, 00:04:47.865 "bdev_auto_examine": true, 00:04:47.865 "iobuf_small_cache_size": 128, 00:04:47.865 "iobuf_large_cache_size": 16 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "bdev_raid_set_options", 00:04:47.865 "params": { 00:04:47.865 "process_window_size_kb": 1024 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "bdev_nvme_set_options", 00:04:47.865 "params": { 00:04:47.865 "action_on_timeout": "none", 00:04:47.865 "timeout_us": 0, 00:04:47.865 "timeout_admin_us": 0, 00:04:47.865 "keep_alive_timeout_ms": 10000, 00:04:47.865 "arbitration_burst": 0, 00:04:47.865 "low_priority_weight": 0, 00:04:47.865 "medium_priority_weight": 0, 00:04:47.865 "high_priority_weight": 0, 00:04:47.865 "nvme_adminq_poll_period_us": 10000, 00:04:47.865 "nvme_ioq_poll_period_us": 0, 00:04:47.865 "io_queue_requests": 0, 00:04:47.865 "delay_cmd_submit": true, 00:04:47.865 "transport_retry_count": 4, 00:04:47.865 "bdev_retry_count": 3, 00:04:47.865 "transport_ack_timeout": 0, 00:04:47.865 "ctrlr_loss_timeout_sec": 0, 00:04:47.865 "reconnect_delay_sec": 0, 00:04:47.865 "fast_io_fail_timeout_sec": 0, 00:04:47.865 
"disable_auto_failback": false, 00:04:47.865 "generate_uuids": false, 00:04:47.865 "transport_tos": 0, 00:04:47.865 "nvme_error_stat": false, 00:04:47.865 "rdma_srq_size": 0, 00:04:47.865 "io_path_stat": false, 00:04:47.865 "allow_accel_sequence": false, 00:04:47.865 "rdma_max_cq_size": 0, 00:04:47.865 "rdma_cm_event_timeout_ms": 0, 00:04:47.865 "dhchap_digests": [ 00:04:47.865 "sha256", 00:04:47.865 "sha384", 00:04:47.865 "sha512" 00:04:47.865 ], 00:04:47.865 "dhchap_dhgroups": [ 00:04:47.865 "null", 00:04:47.865 "ffdhe2048", 00:04:47.865 "ffdhe3072", 00:04:47.865 "ffdhe4096", 00:04:47.865 "ffdhe6144", 00:04:47.865 "ffdhe8192" 00:04:47.865 ] 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "bdev_nvme_set_hotplug", 00:04:47.865 "params": { 00:04:47.865 "period_us": 100000, 00:04:47.865 "enable": false 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "bdev_iscsi_set_options", 00:04:47.865 "params": { 00:04:47.865 "timeout_sec": 30 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "bdev_wait_for_examine" 00:04:47.865 } 00:04:47.865 ] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "nvmf", 00:04:47.865 "config": [ 00:04:47.865 { 00:04:47.865 "method": "nvmf_set_config", 00:04:47.865 "params": { 00:04:47.865 "discovery_filter": "match_any", 00:04:47.865 "admin_cmd_passthru": { 00:04:47.865 "identify_ctrlr": false 00:04:47.865 } 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "nvmf_set_max_subsystems", 00:04:47.865 "params": { 00:04:47.865 "max_subsystems": 1024 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "nvmf_set_crdt", 00:04:47.865 "params": { 00:04:47.865 "crdt1": 0, 00:04:47.865 "crdt2": 0, 00:04:47.865 "crdt3": 0 00:04:47.865 } 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "method": "nvmf_create_transport", 00:04:47.865 "params": { 00:04:47.865 "trtype": "TCP", 00:04:47.865 "max_queue_depth": 128, 00:04:47.865 "max_io_qpairs_per_ctrlr": 127, 00:04:47.865 "in_capsule_data_size": 4096, 00:04:47.865 "max_io_size": 131072, 00:04:47.865 "io_unit_size": 131072, 00:04:47.865 "max_aq_depth": 128, 00:04:47.865 "num_shared_buffers": 511, 00:04:47.865 "buf_cache_size": 4294967295, 00:04:47.865 "dif_insert_or_strip": false, 00:04:47.865 "zcopy": false, 00:04:47.865 "c2h_success": true, 00:04:47.865 "sock_priority": 0, 00:04:47.865 "abort_timeout_sec": 1, 00:04:47.865 "ack_timeout": 0, 00:04:47.865 "data_wr_pool_size": 0 00:04:47.865 } 00:04:47.865 } 00:04:47.865 ] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "nbd", 00:04:47.865 "config": [] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "ublk", 00:04:47.865 "config": [] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "vhost_blk", 00:04:47.865 "config": [] 00:04:47.865 }, 00:04:47.865 { 00:04:47.865 "subsystem": "scsi", 00:04:47.866 "config": null 00:04:47.866 }, 00:04:47.866 { 00:04:47.866 "subsystem": "iscsi", 00:04:47.866 "config": [ 00:04:47.866 { 00:04:47.866 "method": "iscsi_set_options", 00:04:47.866 "params": { 00:04:47.866 "node_base": "iqn.2016-06.io.spdk", 00:04:47.866 "max_sessions": 128, 00:04:47.866 "max_connections_per_session": 2, 00:04:47.866 "max_queue_depth": 64, 00:04:47.866 "default_time2wait": 2, 00:04:47.866 "default_time2retain": 20, 00:04:47.866 "first_burst_length": 8192, 00:04:47.866 "immediate_data": true, 00:04:47.866 "allow_duplicated_isid": false, 00:04:47.866 "error_recovery_level": 0, 00:04:47.866 "nop_timeout": 60, 00:04:47.866 "nop_in_interval": 30, 00:04:47.866 
"disable_chap": false, 00:04:47.866 "require_chap": false, 00:04:47.866 "mutual_chap": false, 00:04:47.866 "chap_group": 0, 00:04:47.866 "max_large_datain_per_connection": 64, 00:04:47.866 "max_r2t_per_connection": 4, 00:04:47.866 "pdu_pool_size": 36864, 00:04:47.866 "immediate_data_pool_size": 16384, 00:04:47.866 "data_out_pool_size": 2048 00:04:47.866 } 00:04:47.866 } 00:04:47.866 ] 00:04:47.866 }, 00:04:47.866 { 00:04:47.866 "subsystem": "vhost_scsi", 00:04:47.866 "config": [] 00:04:47.866 } 00:04:47.866 ] 00:04:47.866 } 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 298874 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 298874 ']' 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 298874 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 298874 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 298874' 00:04:47.866 killing process with pid 298874 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 298874 00:04:47.866 20:18:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 298874 00:04:48.433 20:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=299152 00:04:48.433 20:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:48.433 20:18:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 299152 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 299152 ']' 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 299152 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 299152 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 299152' 00:04:53.697 killing process with pid 299152 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 299152 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 299152 00:04:53.697 20:18:45 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:53.697 00:04:53.697 real 0m6.759s 00:04:53.697 user 0m6.561s 00:04:53.697 sys 0m0.635s 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.697 ************************************ 00:04:53.697 END TEST skip_rpc_with_json 00:04:53.697 ************************************ 00:04:53.697 20:18:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.697 20:18:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.697 20:18:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.697 20:18:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.697 20:18:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.697 ************************************ 00:04:53.697 START TEST skip_rpc_with_delay 00:04:53.697 ************************************ 00:04:53.697 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.698 20:18:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.698 [2024-07-15 20:18:46.012661] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:53.698 [2024-07-15 20:18:46.012799] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:53.698 20:18:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:53.698 20:18:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:53.698 20:18:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:53.698 20:18:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:53.698 00:04:53.698 real 0m0.045s 00:04:53.698 user 0m0.021s 00:04:53.698 sys 0m0.024s 00:04:53.698 20:18:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.698 20:18:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.698 ************************************ 00:04:53.698 END TEST skip_rpc_with_delay 00:04:53.698 ************************************ 00:04:53.698 20:18:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.698 20:18:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.698 20:18:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.698 20:18:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.698 20:18:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.698 20:18:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.698 20:18:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.955 ************************************ 00:04:53.955 START TEST exit_on_failed_rpc_init 00:04:53.955 ************************************ 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=300259 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 300259 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 300259 ']' 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.955 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.955 [2024-07-15 20:18:46.139712] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:04:53.955 [2024-07-15 20:18:46.139790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300259 ] 00:04:53.955 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.955 [2024-07-15 20:18:46.209062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.955 [2024-07-15 20:18:46.285847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.889 20:18:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.889 [2024-07-15 20:18:46.984983] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:04:54.889 [2024-07-15 20:18:46.985048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300293 ] 00:04:54.889 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.889 [2024-07-15 20:18:47.052817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.889 [2024-07-15 20:18:47.125141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.889 [2024-07-15 20:18:47.125221] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:54.889 [2024-07-15 20:18:47.125233] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:54.889 [2024-07-15 20:18:47.125240] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 300259 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 300259 ']' 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 300259 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 300259 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 300259' 00:04:54.889 killing process with pid 300259 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 300259 00:04:54.889 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 300259 00:04:55.454 00:04:55.454 real 0m1.431s 00:04:55.454 user 0m1.591s 00:04:55.454 sys 0m0.444s 00:04:55.454 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.454 20:18:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.454 ************************************ 00:04:55.454 END TEST exit_on_failed_rpc_init 00:04:55.454 ************************************ 00:04:55.454 20:18:47 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:55.454 20:18:47 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:55.454 00:04:55.454 real 0m14.041s 00:04:55.454 user 0m13.443s 00:04:55.454 sys 0m1.713s 00:04:55.454 20:18:47 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.454 20:18:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.454 ************************************ 00:04:55.454 END TEST skip_rpc 00:04:55.454 ************************************ 00:04:55.454 20:18:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.454 20:18:47 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.454 20:18:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.454 20:18:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.454 20:18:47 -- common/autotest_common.sh@10 -- # set +x 00:04:55.454 ************************************ 00:04:55.454 START TEST rpc_client 00:04:55.454 ************************************ 00:04:55.454 20:18:47 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.454 * Looking for test storage... 00:04:55.454 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:55.454 20:18:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:55.454 OK 00:04:55.454 20:18:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:55.454 00:04:55.454 real 0m0.131s 00:04:55.454 user 0m0.052s 00:04:55.454 sys 0m0.089s 00:04:55.454 20:18:47 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.454 20:18:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:55.454 ************************************ 00:04:55.454 END TEST rpc_client 00:04:55.454 ************************************ 00:04:55.713 20:18:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.713 20:18:47 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.713 20:18:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.713 20:18:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.713 20:18:47 -- common/autotest_common.sh@10 -- # set +x 00:04:55.713 ************************************ 00:04:55.713 START TEST json_config 00:04:55.713 ************************************ 00:04:55.713 20:18:47 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.713 20:18:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.713 20:18:47 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:55.713 20:18:47 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.713 20:18:47 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.713 20:18:47 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.713 20:18:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.713 20:18:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.713 20:18:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.713 20:18:47 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.713 20:18:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@47 -- # : 0 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.713 20:18:47 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:55.713 20:18:47 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:55.713 20:18:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:55.713 20:18:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.713 20:18:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.713 20:18:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.713 20:18:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.713 20:18:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:55.713 WARNING: No tests are enabled so not running JSON configuration tests 00:04:55.713 20:18:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:55.713 00:04:55.713 real 0m0.077s 00:04:55.713 user 0m0.029s 00:04:55.713 sys 0m0.048s 00:04:55.713 20:18:47 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.713 20:18:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.713 ************************************ 00:04:55.713 END TEST json_config 00:04:55.713 ************************************ 00:04:55.713 20:18:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.713 20:18:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:55.713 20:18:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.713 20:18:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.713 20:18:47 -- common/autotest_common.sh@10 -- # set +x 00:04:55.713 ************************************ 00:04:55.713 START TEST json_config_extra_key 00:04:55.713 ************************************ 00:04:55.713 20:18:48 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:55.972 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:55.972 20:18:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.972 20:18:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.972 20:18:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.972 20:18:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.972 20:18:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.972 20:18:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.972 20:18:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:55.972 20:18:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:55.972 20:18:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:55.973 20:18:48 
json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.973 20:18:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.973 20:18:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.973 20:18:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:55.973 20:18:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:55.973 20:18:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:55.973 INFO: launching applications... 00:04:55.973 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=300688 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.973 Waiting for target to run... 
00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 300688 /var/tmp/spdk_tgt.sock 00:04:55.973 20:18:48 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 300688 ']' 00:04:55.973 20:18:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:55.973 20:18:48 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.973 20:18:48 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.973 20:18:48 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.973 20:18:48 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.973 20:18:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.973 [2024-07-15 20:18:48.157713] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:55.973 [2024-07-15 20:18:48.157762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300688 ] 00:04:55.973 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.231 [2024-07-15 20:18:48.422041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.231 [2024-07-15 20:18:48.485448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.795 20:18:48 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.795 20:18:48 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:56.795 20:18:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:56.795 00:04:56.795 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:56.795 INFO: shutting down applications... 
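The waitforlisten call traced above (max_retries=100) blocks until the launched target is both alive and answering RPCs on its UNIX domain socket. A simplified sketch of that style of wait loop, assuming scripts/rpc.py from the SPDK tree; this is not the exact helper from autotest_common.sh:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Bail out if the target died before it ever started listening.
            kill -0 "$pid" 2>/dev/null || return 1
            # Done once a trivial RPC succeeds over the socket.
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }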
00:04:56.795 20:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:56.795 20:18:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:56.795 20:18:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.795 20:18:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 300688 ]] 00:04:56.795 20:18:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 300688 00:04:56.795 20:18:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.795 20:18:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.795 20:18:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 300688 00:04:56.795 20:18:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.362 20:18:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.362 20:18:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.362 20:18:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 300688 00:04:57.362 20:18:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:57.362 20:18:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:57.362 20:18:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:57.362 20:18:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:57.362 SPDK target shutdown done 00:04:57.362 20:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:57.362 Success 00:04:57.362 00:04:57.362 real 0m1.455s 00:04:57.362 user 0m1.214s 00:04:57.362 sys 0m0.380s 00:04:57.362 20:18:49 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.362 20:18:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.362 ************************************ 00:04:57.362 END TEST json_config_extra_key 00:04:57.362 ************************************ 00:04:57.362 20:18:49 -- common/autotest_common.sh@1142 -- # return 0 00:04:57.362 20:18:49 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.362 20:18:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.362 20:18:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.362 20:18:49 -- common/autotest_common.sh@10 -- # set +x 00:04:57.362 ************************************ 00:04:57.362 START TEST alias_rpc 00:04:57.362 ************************************ 00:04:57.362 20:18:49 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.362 * Looking for test storage... 
00:04:57.362 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:57.362 20:18:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.362 20:18:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=301006 00:04:57.362 20:18:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.362 20:18:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 301006 00:04:57.362 20:18:49 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 301006 ']' 00:04:57.362 20:18:49 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.362 20:18:49 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.362 20:18:49 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.362 20:18:49 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.362 20:18:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.362 [2024-07-15 20:18:49.703076] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:04:57.362 [2024-07-15 20:18:49.703154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301006 ] 00:04:57.362 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.637 [2024-07-15 20:18:49.772350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.637 [2024-07-15 20:18:49.845431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.211 20:18:50 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.211 20:18:50 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:58.211 20:18:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:58.468 20:18:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 301006 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 301006 ']' 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 301006 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 301006 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 301006' 00:04:58.468 killing process with pid 301006 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@967 -- # kill 301006 00:04:58.468 20:18:50 alias_rpc -- common/autotest_common.sh@972 -- # wait 301006 00:04:58.726 00:04:58.726 real 0m1.509s 00:04:58.726 user 0m1.603s 00:04:58.726 sys 0m0.453s 00:04:58.726 20:18:51 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.726 20:18:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.726 
************************************ 00:04:58.726 END TEST alias_rpc 00:04:58.726 ************************************ 00:04:58.984 20:18:51 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.984 20:18:51 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:58.984 20:18:51 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.984 20:18:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.984 20:18:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.984 20:18:51 -- common/autotest_common.sh@10 -- # set +x 00:04:58.984 ************************************ 00:04:58.984 START TEST spdkcli_tcp 00:04:58.984 ************************************ 00:04:58.984 20:18:51 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.984 * Looking for test storage... 00:04:58.984 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:58.984 20:18:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:58.984 20:18:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:58.984 20:18:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:58.984 20:18:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.984 20:18:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.984 20:18:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.985 20:18:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.985 20:18:51 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.985 20:18:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.985 20:18:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=301328 00:04:58.985 20:18:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.985 20:18:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 301328 00:04:58.985 20:18:51 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 301328 ']' 00:04:58.985 20:18:51 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.985 20:18:51 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.985 20:18:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.985 20:18:51 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.985 20:18:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.985 [2024-07-15 20:18:51.291383] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:04:58.985 [2024-07-15 20:18:51.291476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301328 ] 00:04:58.985 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.985 [2024-07-15 20:18:51.360421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.242 [2024-07-15 20:18:51.434064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.242 [2024-07-15 20:18:51.434066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.806 20:18:52 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.806 20:18:52 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:59.806 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=301589 00:04:59.806 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:59.806 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:00.065 [ 00:05:00.065 "spdk_get_version", 00:05:00.065 "rpc_get_methods", 00:05:00.065 "trace_get_info", 00:05:00.065 "trace_get_tpoint_group_mask", 00:05:00.065 "trace_disable_tpoint_group", 00:05:00.065 "trace_enable_tpoint_group", 00:05:00.065 "trace_clear_tpoint_mask", 00:05:00.065 "trace_set_tpoint_mask", 00:05:00.065 "vfu_tgt_set_base_path", 00:05:00.065 "framework_get_pci_devices", 00:05:00.065 "framework_get_config", 00:05:00.065 "framework_get_subsystems", 00:05:00.065 "keyring_get_keys", 00:05:00.065 "iobuf_get_stats", 00:05:00.065 "iobuf_set_options", 00:05:00.065 "sock_get_default_impl", 00:05:00.065 "sock_set_default_impl", 00:05:00.065 "sock_impl_set_options", 00:05:00.065 "sock_impl_get_options", 00:05:00.065 "vmd_rescan", 00:05:00.065 "vmd_remove_device", 00:05:00.065 "vmd_enable", 00:05:00.065 "accel_get_stats", 00:05:00.065 "accel_set_options", 00:05:00.065 "accel_set_driver", 00:05:00.065 "accel_crypto_key_destroy", 00:05:00.065 "accel_crypto_keys_get", 00:05:00.065 "accel_crypto_key_create", 00:05:00.065 "accel_assign_opc", 00:05:00.065 "accel_get_module_info", 00:05:00.065 "accel_get_opc_assignments", 00:05:00.065 "notify_get_notifications", 00:05:00.065 "notify_get_types", 00:05:00.065 "bdev_get_histogram", 00:05:00.065 "bdev_enable_histogram", 00:05:00.065 "bdev_set_qos_limit", 00:05:00.065 "bdev_set_qd_sampling_period", 00:05:00.065 "bdev_get_bdevs", 00:05:00.065 "bdev_reset_iostat", 00:05:00.065 "bdev_get_iostat", 00:05:00.065 "bdev_examine", 00:05:00.065 "bdev_wait_for_examine", 00:05:00.065 "bdev_set_options", 00:05:00.065 "scsi_get_devices", 00:05:00.065 "thread_set_cpumask", 00:05:00.065 "framework_get_governor", 00:05:00.065 "framework_get_scheduler", 00:05:00.065 "framework_set_scheduler", 00:05:00.065 "framework_get_reactors", 00:05:00.065 "thread_get_io_channels", 00:05:00.065 "thread_get_pollers", 00:05:00.065 "thread_get_stats", 00:05:00.065 "framework_monitor_context_switch", 00:05:00.065 "spdk_kill_instance", 00:05:00.065 "log_enable_timestamps", 00:05:00.065 "log_get_flags", 00:05:00.065 "log_clear_flag", 00:05:00.065 "log_set_flag", 00:05:00.065 "log_get_level", 00:05:00.065 "log_set_level", 00:05:00.065 "log_get_print_level", 00:05:00.065 "log_set_print_level", 00:05:00.065 "framework_enable_cpumask_locks", 00:05:00.065 "framework_disable_cpumask_locks", 
00:05:00.065 "framework_wait_init", 00:05:00.065 "framework_start_init", 00:05:00.065 "virtio_blk_create_transport", 00:05:00.065 "virtio_blk_get_transports", 00:05:00.065 "vhost_controller_set_coalescing", 00:05:00.065 "vhost_get_controllers", 00:05:00.065 "vhost_delete_controller", 00:05:00.065 "vhost_create_blk_controller", 00:05:00.065 "vhost_scsi_controller_remove_target", 00:05:00.065 "vhost_scsi_controller_add_target", 00:05:00.065 "vhost_start_scsi_controller", 00:05:00.065 "vhost_create_scsi_controller", 00:05:00.065 "ublk_recover_disk", 00:05:00.065 "ublk_get_disks", 00:05:00.065 "ublk_stop_disk", 00:05:00.065 "ublk_start_disk", 00:05:00.065 "ublk_destroy_target", 00:05:00.065 "ublk_create_target", 00:05:00.065 "nbd_get_disks", 00:05:00.065 "nbd_stop_disk", 00:05:00.065 "nbd_start_disk", 00:05:00.065 "env_dpdk_get_mem_stats", 00:05:00.065 "nvmf_stop_mdns_prr", 00:05:00.065 "nvmf_publish_mdns_prr", 00:05:00.065 "nvmf_subsystem_get_listeners", 00:05:00.065 "nvmf_subsystem_get_qpairs", 00:05:00.065 "nvmf_subsystem_get_controllers", 00:05:00.065 "nvmf_get_stats", 00:05:00.065 "nvmf_get_transports", 00:05:00.065 "nvmf_create_transport", 00:05:00.065 "nvmf_get_targets", 00:05:00.065 "nvmf_delete_target", 00:05:00.065 "nvmf_create_target", 00:05:00.065 "nvmf_subsystem_allow_any_host", 00:05:00.065 "nvmf_subsystem_remove_host", 00:05:00.065 "nvmf_subsystem_add_host", 00:05:00.065 "nvmf_ns_remove_host", 00:05:00.065 "nvmf_ns_add_host", 00:05:00.065 "nvmf_subsystem_remove_ns", 00:05:00.065 "nvmf_subsystem_add_ns", 00:05:00.065 "nvmf_subsystem_listener_set_ana_state", 00:05:00.065 "nvmf_discovery_get_referrals", 00:05:00.065 "nvmf_discovery_remove_referral", 00:05:00.065 "nvmf_discovery_add_referral", 00:05:00.065 "nvmf_subsystem_remove_listener", 00:05:00.065 "nvmf_subsystem_add_listener", 00:05:00.065 "nvmf_delete_subsystem", 00:05:00.065 "nvmf_create_subsystem", 00:05:00.065 "nvmf_get_subsystems", 00:05:00.065 "nvmf_set_crdt", 00:05:00.065 "nvmf_set_config", 00:05:00.065 "nvmf_set_max_subsystems", 00:05:00.065 "iscsi_get_histogram", 00:05:00.065 "iscsi_enable_histogram", 00:05:00.065 "iscsi_set_options", 00:05:00.065 "iscsi_get_auth_groups", 00:05:00.065 "iscsi_auth_group_remove_secret", 00:05:00.065 "iscsi_auth_group_add_secret", 00:05:00.065 "iscsi_delete_auth_group", 00:05:00.065 "iscsi_create_auth_group", 00:05:00.065 "iscsi_set_discovery_auth", 00:05:00.065 "iscsi_get_options", 00:05:00.065 "iscsi_target_node_request_logout", 00:05:00.065 "iscsi_target_node_set_redirect", 00:05:00.065 "iscsi_target_node_set_auth", 00:05:00.065 "iscsi_target_node_add_lun", 00:05:00.065 "iscsi_get_stats", 00:05:00.065 "iscsi_get_connections", 00:05:00.065 "iscsi_portal_group_set_auth", 00:05:00.065 "iscsi_start_portal_group", 00:05:00.065 "iscsi_delete_portal_group", 00:05:00.065 "iscsi_create_portal_group", 00:05:00.065 "iscsi_get_portal_groups", 00:05:00.065 "iscsi_delete_target_node", 00:05:00.065 "iscsi_target_node_remove_pg_ig_maps", 00:05:00.065 "iscsi_target_node_add_pg_ig_maps", 00:05:00.065 "iscsi_create_target_node", 00:05:00.065 "iscsi_get_target_nodes", 00:05:00.065 "iscsi_delete_initiator_group", 00:05:00.065 "iscsi_initiator_group_remove_initiators", 00:05:00.065 "iscsi_initiator_group_add_initiators", 00:05:00.065 "iscsi_create_initiator_group", 00:05:00.065 "iscsi_get_initiator_groups", 00:05:00.065 "keyring_linux_set_options", 00:05:00.065 "keyring_file_remove_key", 00:05:00.065 "keyring_file_add_key", 00:05:00.065 "vfu_virtio_create_scsi_endpoint", 00:05:00.065 
"vfu_virtio_scsi_remove_target", 00:05:00.065 "vfu_virtio_scsi_add_target", 00:05:00.065 "vfu_virtio_create_blk_endpoint", 00:05:00.065 "vfu_virtio_delete_endpoint", 00:05:00.065 "iaa_scan_accel_module", 00:05:00.065 "dsa_scan_accel_module", 00:05:00.065 "ioat_scan_accel_module", 00:05:00.065 "accel_error_inject_error", 00:05:00.065 "bdev_iscsi_delete", 00:05:00.065 "bdev_iscsi_create", 00:05:00.065 "bdev_iscsi_set_options", 00:05:00.065 "bdev_virtio_attach_controller", 00:05:00.065 "bdev_virtio_scsi_get_devices", 00:05:00.065 "bdev_virtio_detach_controller", 00:05:00.065 "bdev_virtio_blk_set_hotplug", 00:05:00.065 "bdev_ftl_set_property", 00:05:00.065 "bdev_ftl_get_properties", 00:05:00.065 "bdev_ftl_get_stats", 00:05:00.065 "bdev_ftl_unmap", 00:05:00.065 "bdev_ftl_unload", 00:05:00.065 "bdev_ftl_delete", 00:05:00.065 "bdev_ftl_load", 00:05:00.065 "bdev_ftl_create", 00:05:00.065 "bdev_aio_delete", 00:05:00.065 "bdev_aio_rescan", 00:05:00.065 "bdev_aio_create", 00:05:00.065 "blobfs_create", 00:05:00.065 "blobfs_detect", 00:05:00.066 "blobfs_set_cache_size", 00:05:00.066 "bdev_zone_block_delete", 00:05:00.066 "bdev_zone_block_create", 00:05:00.066 "bdev_delay_delete", 00:05:00.066 "bdev_delay_create", 00:05:00.066 "bdev_delay_update_latency", 00:05:00.066 "bdev_split_delete", 00:05:00.066 "bdev_split_create", 00:05:00.066 "bdev_error_inject_error", 00:05:00.066 "bdev_error_delete", 00:05:00.066 "bdev_error_create", 00:05:00.066 "bdev_raid_set_options", 00:05:00.066 "bdev_raid_remove_base_bdev", 00:05:00.066 "bdev_raid_add_base_bdev", 00:05:00.066 "bdev_raid_delete", 00:05:00.066 "bdev_raid_create", 00:05:00.066 "bdev_raid_get_bdevs", 00:05:00.066 "bdev_lvol_set_parent_bdev", 00:05:00.066 "bdev_lvol_set_parent", 00:05:00.066 "bdev_lvol_check_shallow_copy", 00:05:00.066 "bdev_lvol_start_shallow_copy", 00:05:00.066 "bdev_lvol_grow_lvstore", 00:05:00.066 "bdev_lvol_get_lvols", 00:05:00.066 "bdev_lvol_get_lvstores", 00:05:00.066 "bdev_lvol_delete", 00:05:00.066 "bdev_lvol_set_read_only", 00:05:00.066 "bdev_lvol_resize", 00:05:00.066 "bdev_lvol_decouple_parent", 00:05:00.066 "bdev_lvol_inflate", 00:05:00.066 "bdev_lvol_rename", 00:05:00.066 "bdev_lvol_clone_bdev", 00:05:00.066 "bdev_lvol_clone", 00:05:00.066 "bdev_lvol_snapshot", 00:05:00.066 "bdev_lvol_create", 00:05:00.066 "bdev_lvol_delete_lvstore", 00:05:00.066 "bdev_lvol_rename_lvstore", 00:05:00.066 "bdev_lvol_create_lvstore", 00:05:00.066 "bdev_passthru_delete", 00:05:00.066 "bdev_passthru_create", 00:05:00.066 "bdev_nvme_cuse_unregister", 00:05:00.066 "bdev_nvme_cuse_register", 00:05:00.066 "bdev_opal_new_user", 00:05:00.066 "bdev_opal_set_lock_state", 00:05:00.066 "bdev_opal_delete", 00:05:00.066 "bdev_opal_get_info", 00:05:00.066 "bdev_opal_create", 00:05:00.066 "bdev_nvme_opal_revert", 00:05:00.066 "bdev_nvme_opal_init", 00:05:00.066 "bdev_nvme_send_cmd", 00:05:00.066 "bdev_nvme_get_path_iostat", 00:05:00.066 "bdev_nvme_get_mdns_discovery_info", 00:05:00.066 "bdev_nvme_stop_mdns_discovery", 00:05:00.066 "bdev_nvme_start_mdns_discovery", 00:05:00.066 "bdev_nvme_set_multipath_policy", 00:05:00.066 "bdev_nvme_set_preferred_path", 00:05:00.066 "bdev_nvme_get_io_paths", 00:05:00.066 "bdev_nvme_remove_error_injection", 00:05:00.066 "bdev_nvme_add_error_injection", 00:05:00.066 "bdev_nvme_get_discovery_info", 00:05:00.066 "bdev_nvme_stop_discovery", 00:05:00.066 "bdev_nvme_start_discovery", 00:05:00.066 "bdev_nvme_get_controller_health_info", 00:05:00.066 "bdev_nvme_disable_controller", 00:05:00.066 "bdev_nvme_enable_controller", 00:05:00.066 
"bdev_nvme_reset_controller", 00:05:00.066 "bdev_nvme_get_transport_statistics", 00:05:00.066 "bdev_nvme_apply_firmware", 00:05:00.066 "bdev_nvme_detach_controller", 00:05:00.066 "bdev_nvme_get_controllers", 00:05:00.066 "bdev_nvme_attach_controller", 00:05:00.066 "bdev_nvme_set_hotplug", 00:05:00.066 "bdev_nvme_set_options", 00:05:00.066 "bdev_null_resize", 00:05:00.066 "bdev_null_delete", 00:05:00.066 "bdev_null_create", 00:05:00.066 "bdev_malloc_delete", 00:05:00.066 "bdev_malloc_create" 00:05:00.066 ] 00:05:00.066 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.066 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:00.066 20:18:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 301328 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 301328 ']' 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 301328 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 301328 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 301328' 00:05:00.066 killing process with pid 301328 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 301328 00:05:00.066 20:18:52 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 301328 00:05:00.324 00:05:00.324 real 0m1.517s 00:05:00.324 user 0m2.795s 00:05:00.324 sys 0m0.476s 00:05:00.324 20:18:52 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.324 20:18:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.324 ************************************ 00:05:00.324 END TEST spdkcli_tcp 00:05:00.324 ************************************ 00:05:00.582 20:18:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:00.582 20:18:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.582 20:18:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.582 20:18:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.582 20:18:52 -- common/autotest_common.sh@10 -- # set +x 00:05:00.582 ************************************ 00:05:00.582 START TEST dpdk_mem_utility 00:05:00.582 ************************************ 00:05:00.582 20:18:52 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.582 * Looking for test storage... 
00:05:00.582 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:00.582 20:18:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:00.582 20:18:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=301671 00:05:00.582 20:18:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 301671 00:05:00.582 20:18:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.582 20:18:52 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 301671 ']' 00:05:00.582 20:18:52 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.582 20:18:52 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.582 20:18:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.582 20:18:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.582 20:18:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.582 [2024-07-15 20:18:52.876789] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:00.582 [2024-07-15 20:18:52.876876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301671 ] 00:05:00.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.582 [2024-07-15 20:18:52.945221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.839 [2024-07-15 20:18:53.025044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.401 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.401 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:01.401 20:18:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:01.401 20:18:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:01.401 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.401 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.401 { 00:05:01.401 "filename": "/tmp/spdk_mem_dump.txt" 00:05:01.401 } 00:05:01.401 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.401 20:18:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:01.401 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:01.401 1 heaps totaling size 814.000000 MiB 00:05:01.401 size: 814.000000 MiB heap id: 0 00:05:01.401 end heaps---------- 00:05:01.401 8 mempools totaling size 598.116089 MiB 00:05:01.401 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:01.401 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:01.401 size: 84.521057 MiB name: bdev_io_301671 00:05:01.401 size: 51.011292 MiB name: evtpool_301671 00:05:01.401 
size: 50.003479 MiB name: msgpool_301671 00:05:01.401 size: 21.763794 MiB name: PDU_Pool 00:05:01.401 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:01.401 size: 0.026123 MiB name: Session_Pool 00:05:01.401 end mempools------- 00:05:01.401 6 memzones totaling size 4.142822 MiB 00:05:01.401 size: 1.000366 MiB name: RG_ring_0_301671 00:05:01.401 size: 1.000366 MiB name: RG_ring_1_301671 00:05:01.401 size: 1.000366 MiB name: RG_ring_4_301671 00:05:01.401 size: 1.000366 MiB name: RG_ring_5_301671 00:05:01.401 size: 0.125366 MiB name: RG_ring_2_301671 00:05:01.401 size: 0.015991 MiB name: RG_ring_3_301671 00:05:01.401 end memzones------- 00:05:01.401 20:18:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:01.658 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:01.658 list of free elements. size: 12.519348 MiB 00:05:01.658 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:01.658 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:01.658 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:01.658 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:01.658 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:01.658 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:01.658 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:01.658 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:01.658 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:01.658 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:01.658 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:01.658 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:01.658 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:01.658 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:01.658 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:01.658 list of standard malloc elements. 
size: 199.218079 MiB 00:05:01.658 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:01.658 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:01.658 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:01.658 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:01.658 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:01.658 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:01.658 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:01.658 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:01.658 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:01.658 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:01.658 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:01.658 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:01.658 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:01.658 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:01.658 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:01.658 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:01.658 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:01.658 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:01.658 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:01.658 list of memzone associated elements. 
size: 602.262573 MiB 00:05:01.658 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:01.658 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:01.658 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:01.658 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:01.658 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:01.658 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_301671_0 00:05:01.658 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:01.658 associated memzone info: size: 48.002930 MiB name: MP_evtpool_301671_0 00:05:01.658 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:01.658 associated memzone info: size: 48.002930 MiB name: MP_msgpool_301671_0 00:05:01.658 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:01.658 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:01.658 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:01.658 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:01.658 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:01.658 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_301671 00:05:01.658 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:01.658 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_301671 00:05:01.658 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:01.658 associated memzone info: size: 1.007996 MiB name: MP_evtpool_301671 00:05:01.658 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:01.658 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:01.658 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:01.658 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:01.658 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:01.658 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:01.658 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:01.658 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:01.658 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:01.658 associated memzone info: size: 1.000366 MiB name: RG_ring_0_301671 00:05:01.658 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:01.658 associated memzone info: size: 1.000366 MiB name: RG_ring_1_301671 00:05:01.658 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:01.658 associated memzone info: size: 1.000366 MiB name: RG_ring_4_301671 00:05:01.658 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:01.658 associated memzone info: size: 1.000366 MiB name: RG_ring_5_301671 00:05:01.658 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:01.658 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_301671 00:05:01.658 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:01.658 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:01.658 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:01.658 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:01.658 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:01.658 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:01.658 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:01.658 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_301671 00:05:01.658 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:01.658 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:01.658 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:01.658 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:01.658 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:01.658 associated memzone info: size: 0.015991 MiB name: RG_ring_3_301671 00:05:01.658 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:01.658 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:01.658 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:01.658 associated memzone info: size: 0.000183 MiB name: MP_msgpool_301671 00:05:01.658 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:01.658 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_301671 00:05:01.658 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:01.658 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:01.658 20:18:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:01.658 20:18:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 301671 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 301671 ']' 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 301671 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 301671 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 301671' 00:05:01.658 killing process with pid 301671 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 301671 00:05:01.658 20:18:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 301671 00:05:01.917 00:05:01.917 real 0m1.407s 00:05:01.917 user 0m1.452s 00:05:01.917 sys 0m0.431s 00:05:01.917 20:18:54 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.917 20:18:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.917 ************************************ 00:05:01.917 END TEST dpdk_mem_utility 00:05:01.917 ************************************ 00:05:01.917 20:18:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.917 20:18:54 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:01.917 20:18:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.917 20:18:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.917 20:18:54 -- common/autotest_common.sh@10 -- # set +x 00:05:01.917 ************************************ 00:05:01.917 START TEST event 00:05:01.917 ************************************ 00:05:01.917 20:18:54 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:02.175 * Looking for test storage... 
00:05:02.175 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:02.175 20:18:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:02.175 20:18:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:02.175 20:18:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.175 20:18:54 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:02.175 20:18:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.175 20:18:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.175 ************************************ 00:05:02.175 START TEST event_perf 00:05:02.175 ************************************ 00:05:02.175 20:18:54 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.175 Running I/O for 1 seconds...[2024-07-15 20:18:54.401454] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:02.175 [2024-07-15 20:18:54.401536] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301990 ] 00:05:02.175 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.175 [2024-07-15 20:18:54.470928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.175 [2024-07-15 20:18:54.546008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.175 [2024-07-15 20:18:54.546107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.175 [2024-07-15 20:18:54.546195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.175 [2024-07-15 20:18:54.546197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.551 Running I/O for 1 seconds... 00:05:03.551 lcore 0: 188625 00:05:03.551 lcore 1: 188626 00:05:03.551 lcore 2: 188627 00:05:03.551 lcore 3: 188625 00:05:03.551 done. 00:05:03.551 00:05:03.551 real 0m1.225s 00:05:03.551 user 0m4.132s 00:05:03.551 sys 0m0.090s 00:05:03.551 20:18:55 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.551 20:18:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.551 ************************************ 00:05:03.551 END TEST event_perf 00:05:03.551 ************************************ 00:05:03.551 20:18:55 event -- common/autotest_common.sh@1142 -- # return 0 00:05:03.551 20:18:55 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.551 20:18:55 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:03.551 20:18:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.551 20:18:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.551 ************************************ 00:05:03.551 START TEST event_reactor 00:05:03.551 ************************************ 00:05:03.551 20:18:55 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.551 [2024-07-15 20:18:55.713108] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:05:03.551 [2024-07-15 20:18:55.713192] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302275 ] 00:05:03.551 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.551 [2024-07-15 20:18:55.785840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.551 [2024-07-15 20:18:55.858065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.928 test_start 00:05:04.928 oneshot 00:05:04.928 tick 100 00:05:04.928 tick 100 00:05:04.928 tick 250 00:05:04.928 tick 100 00:05:04.928 tick 100 00:05:04.928 tick 100 00:05:04.928 tick 250 00:05:04.928 tick 500 00:05:04.928 tick 100 00:05:04.928 tick 100 00:05:04.928 tick 250 00:05:04.928 tick 100 00:05:04.928 tick 100 00:05:04.928 test_end 00:05:04.928 00:05:04.928 real 0m1.225s 00:05:04.928 user 0m1.129s 00:05:04.928 sys 0m0.092s 00:05:04.928 20:18:56 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.928 20:18:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:04.928 ************************************ 00:05:04.928 END TEST event_reactor 00:05:04.928 ************************************ 00:05:04.928 20:18:56 event -- common/autotest_common.sh@1142 -- # return 0 00:05:04.928 20:18:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.928 20:18:56 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:04.928 20:18:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.928 20:18:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.928 ************************************ 00:05:04.928 START TEST event_reactor_perf 00:05:04.928 ************************************ 00:05:04.928 20:18:57 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.928 [2024-07-15 20:18:57.018739] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:05:04.928 [2024-07-15 20:18:57.018827] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302566 ] 00:05:04.928 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.928 [2024-07-15 20:18:57.089992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.928 [2024-07-15 20:18:57.159171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.863 test_start 00:05:05.863 test_end 00:05:05.863 Performance: 975548 events per second 00:05:05.863 00:05:05.863 real 0m1.221s 00:05:05.863 user 0m1.131s 00:05:05.863 sys 0m0.086s 00:05:05.863 20:18:58 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.863 20:18:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.863 ************************************ 00:05:05.863 END TEST event_reactor_perf 00:05:05.863 ************************************ 00:05:06.122 20:18:58 event -- common/autotest_common.sh@1142 -- # return 0 00:05:06.122 20:18:58 event -- event/event.sh@49 -- # uname -s 00:05:06.122 20:18:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.122 20:18:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.122 20:18:58 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.122 20:18:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.122 20:18:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.122 ************************************ 00:05:06.122 START TEST event_scheduler 00:05:06.122 ************************************ 00:05:06.122 20:18:58 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.122 * Looking for test storage... 00:05:06.122 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:06.122 20:18:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.122 20:18:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=302875 00:05:06.122 20:18:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.122 20:18:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 302875 00:05:06.122 20:18:58 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 302875 ']' 00:05:06.122 20:18:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:06.122 20:18:58 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.122 20:18:58 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.122 20:18:58 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.122 20:18:58 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.122 20:18:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.122 [2024-07-15 20:18:58.431945] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:06.122 [2024-07-15 20:18:58.432003] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302875 ] 00:05:06.122 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.122 [2024-07-15 20:18:58.495292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.380 [2024-07-15 20:18:58.578045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.380 [2024-07-15 20:18:58.578128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.380 [2024-07-15 20:18:58.578223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.380 [2024-07-15 20:18:58.578224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.947 20:18:59 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.947 20:18:59 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:06.947 20:18:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.947 20:18:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.947 20:18:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.947 [2024-07-15 20:18:59.284696] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:06.947 [2024-07-15 20:18:59.284718] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.947 [2024-07-15 20:18:59.284728] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.947 [2024-07-15 20:18:59.284739] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.947 [2024-07-15 20:18:59.284746] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.947 20:18:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.947 20:18:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.947 20:18:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.947 20:18:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 [2024-07-15 20:18:59.356152] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
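Because the scheduler app above is launched with --wait-for-rpc, framework initialization is deferred until the harness drives it over RPC: it first selects the dynamic scheduler, then calls framework_start_init, as the trace shows. A minimal sketch of the same two-step sequence issued against a running app, assuming scripts/rpc.py and the default /var/tmp/spdk.sock:

    # Select the dynamic scheduler while the framework is still waiting for RPCs...
    ./scripts/rpc.py framework_set_scheduler dynamic
    # ...then let the app finish initialization and start its reactors.
    ./scripts/rpc.py framework_start_init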
00:05:07.207 20:18:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.207 20:18:59 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.207 20:18:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 ************************************ 00:05:07.207 START TEST scheduler_create_thread 00:05:07.207 ************************************ 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 2 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 3 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 4 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 5 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 6 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 7 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 8 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 9 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 10 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.207 20:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.585 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.585 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:08.585 20:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:08.585 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.585 20:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.961 20:19:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.961 00:05:09.961 real 0m2.619s 00:05:09.961 user 0m0.018s 00:05:09.961 sys 0m0.012s 00:05:09.961 20:19:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.961 20:19:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.961 ************************************ 00:05:09.961 END TEST scheduler_create_thread 00:05:09.961 ************************************ 00:05:09.961 20:19:02 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:09.961 20:19:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:09.961 20:19:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 302875 00:05:09.961 20:19:02 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 302875 ']' 00:05:09.961 20:19:02 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 302875 00:05:09.961 20:19:02 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:09.961 20:19:02 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.961 20:19:02 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 302875 00:05:09.961 20:19:02 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:09.961 20:19:02 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:09.961 20:19:02 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 302875' 00:05:09.962 killing process with pid 302875 00:05:09.962 20:19:02 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 302875 00:05:09.962 20:19:02 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 302875 00:05:10.220 [2024-07-15 20:19:02.494397] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
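The scheduler_create_thread test above drives everything through plugin RPCs that the scheduler test app registers itself (scheduler_plugin is not part of stock rpc.py); condensed from the xtrace, the sequence is roughly:
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # one busy thread pinned to each core (masks 0x1..0x8)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # plus one idle thread per core
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # unpinned; returned thread_id=11 above
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # raise its active load to 50%
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100                # thread_id=12 ...
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12                               # ... which is deleted again straight away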
00:05:10.479 00:05:10.479 real 0m4.368s 00:05:10.479 user 0m8.302s 00:05:10.479 sys 0m0.435s 00:05:10.479 20:19:02 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.479 20:19:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.479 ************************************ 00:05:10.479 END TEST event_scheduler 00:05:10.479 ************************************ 00:05:10.479 20:19:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:10.479 20:19:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:10.479 20:19:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:10.479 20:19:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.479 20:19:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.479 20:19:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.479 ************************************ 00:05:10.479 START TEST app_repeat 00:05:10.479 ************************************ 00:05:10.479 20:19:02 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=303724 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 303724' 00:05:10.479 Process app_repeat pid: 303724 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:10.479 spdk_app_start Round 0 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 303724 /var/tmp/spdk-nbd.sock 00:05:10.479 20:19:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 303724 ']' 00:05:10.479 20:19:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.479 20:19:02 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:10.479 20:19:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.479 20:19:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.479 20:19:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.479 20:19:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.479 [2024-07-15 20:19:02.790801] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
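Before the first round, app_repeat_test brings the target up roughly as follows (workspace prefix trimmed; the background launch with $! is inferred from repeat_pid being captured in the trace):
  modprobe nbd                                                                  # the rounds attach bdevs to /dev/nbd*
  ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &    # -t 4 comes from repeat_times=4
  repeat_pid=$!
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock                            # block until the RPC socket answers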
00:05:10.479 [2024-07-15 20:19:02.790884] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303724 ] 00:05:10.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.738 [2024-07-15 20:19:02.864087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.738 [2024-07-15 20:19:02.943578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.738 [2024-07-15 20:19:02.943580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.303 20:19:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.303 20:19:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:11.303 20:19:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.562 Malloc0 00:05:11.562 20:19:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.821 Malloc1 00:05:11.821 20:19:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.821 20:19:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.821 /dev/nbd0 00:05:11.821 20:19:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.821 20:19:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.821 20:19:04 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.821 1+0 records in 00:05:11.821 1+0 records out 00:05:11.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216784 s, 18.9 MB/s 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.821 20:19:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:11.821 20:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.821 20:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.821 20:19:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.080 /dev/nbd1 00:05:12.080 20:19:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.080 20:19:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.080 1+0 records in 00:05:12.080 1+0 records out 00:05:12.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227245 s, 18.0 MB/s 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:12.080 20:19:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:12.080 20:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.080 
20:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.080 20:19:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.080 20:19:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.080 20:19:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.339 { 00:05:12.339 "nbd_device": "/dev/nbd0", 00:05:12.339 "bdev_name": "Malloc0" 00:05:12.339 }, 00:05:12.339 { 00:05:12.339 "nbd_device": "/dev/nbd1", 00:05:12.339 "bdev_name": "Malloc1" 00:05:12.339 } 00:05:12.339 ]' 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.339 { 00:05:12.339 "nbd_device": "/dev/nbd0", 00:05:12.339 "bdev_name": "Malloc0" 00:05:12.339 }, 00:05:12.339 { 00:05:12.339 "nbd_device": "/dev/nbd1", 00:05:12.339 "bdev_name": "Malloc1" 00:05:12.339 } 00:05:12.339 ]' 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.339 /dev/nbd1' 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.339 /dev/nbd1' 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.339 256+0 records in 00:05:12.339 256+0 records out 00:05:12.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113548 s, 92.3 MB/s 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.339 256+0 records in 00:05:12.339 256+0 records out 00:05:12.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205743 s, 51.0 MB/s 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.339 256+0 records in 00:05:12.339 256+0 records out 
00:05:12.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021736 s, 48.2 MB/s 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.339 20:19:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.340 20:19:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.598 20:19:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.857 20:19:05 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.857 20:19:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.116 20:19:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.116 20:19:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.116 20:19:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.375 [2024-07-15 20:19:05.675902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.375 [2024-07-15 20:19:05.742639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.375 [2024-07-15 20:19:05.742641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.634 [2024-07-15 20:19:05.783842] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.634 [2024-07-15 20:19:05.783896] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.165 20:19:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.165 20:19:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:16.165 spdk_app_start Round 1 00:05:16.165 20:19:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 303724 /var/tmp/spdk-nbd.sock 00:05:16.165 20:19:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 303724 ']' 00:05:16.165 20:19:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.165 20:19:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.165 20:19:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
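Round 0 above and Rounds 1–2 below all repeat the same malloc-bdev/NBD write-verify cycle; condensed from the xtrace, with the workspace prefix dropped:
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096                # -> Malloc0, then again -> Malloc1
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0          # likewise Malloc1 -> /dev/nbd1
  dd if=/dev/urandom of=test/event/nbdrandtest bs=4096 count=256             # 1 MiB reference pattern
  dd if=test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through each nbd device
  cmp -b -n 1M test/event/nbdrandtest /dev/nbd0                              # read back and compare
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0                   # detach both devices again
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM                # stop the app so the next round restarts it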
00:05:16.165 20:19:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.165 20:19:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.424 20:19:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.424 20:19:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:16.424 20:19:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.682 Malloc0 00:05:16.682 20:19:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.682 Malloc1 00:05:16.682 20:19:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.682 20:19:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.940 /dev/nbd0 00:05:16.940 20:19:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.940 20:19:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.940 1+0 records in 00:05:16.940 1+0 records out 00:05:16.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213539 s, 19.2 MB/s 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.940 20:19:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:16.940 20:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.940 20:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.940 20:19:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.198 /dev/nbd1 00:05:17.198 20:19:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.198 20:19:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.198 1+0 records in 00:05:17.198 1+0 records out 00:05:17.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289941 s, 14.1 MB/s 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.198 20:19:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:17.198 20:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.198 20:19:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.198 20:19:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.198 20:19:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.198 20:19:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:17.457 20:19:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.457 { 00:05:17.457 "nbd_device": "/dev/nbd0", 00:05:17.457 "bdev_name": "Malloc0" 00:05:17.457 }, 00:05:17.457 { 00:05:17.457 "nbd_device": "/dev/nbd1", 00:05:17.457 "bdev_name": "Malloc1" 00:05:17.457 } 00:05:17.457 ]' 00:05:17.457 20:19:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.457 { 00:05:17.457 "nbd_device": "/dev/nbd0", 00:05:17.457 "bdev_name": "Malloc0" 00:05:17.457 }, 00:05:17.457 { 00:05:17.457 "nbd_device": "/dev/nbd1", 00:05:17.457 "bdev_name": "Malloc1" 00:05:17.457 } 00:05:17.457 ]' 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.458 /dev/nbd1' 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.458 /dev/nbd1' 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.458 256+0 records in 00:05:17.458 256+0 records out 00:05:17.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110997 s, 94.5 MB/s 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.458 256+0 records in 00:05:17.458 256+0 records out 00:05:17.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202818 s, 51.7 MB/s 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.458 256+0 records in 00:05:17.458 256+0 records out 00:05:17.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219681 s, 47.7 MB/s 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.458 20:19:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.717 20:19:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.976 20:19:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.236 20:19:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.236 20:19:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.236 20:19:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.236 20:19:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.236 20:19:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.236 20:19:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.236 20:19:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.236 20:19:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.236 20:19:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.236 20:19:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.236 20:19:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.495 [2024-07-15 20:19:10.740975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.495 [2024-07-15 20:19:10.806745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.495 [2024-07-15 20:19:10.806747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.495 [2024-07-15 20:19:10.848690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.495 [2024-07-15 20:19:10.848731] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.785 20:19:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.785 20:19:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:21.785 spdk_app_start Round 2 00:05:21.785 20:19:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 303724 /var/tmp/spdk-nbd.sock 00:05:21.785 20:19:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 303724 ']' 00:05:21.785 20:19:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.785 20:19:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.785 20:19:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
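The nbd_get_count checks in each round (count=2 while the devices are attached, count=0 after nbd_stop_disk) reduce to parsing the nbd_get_disks JSON; a sketch of the equivalent one-liner, with the || true guard inferred from the bare "true" seen in the trace when grep -c prints 0:
  count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -ne 2 ] && exit 1    # the post-stop check compares against 0 instead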
00:05:21.785 20:19:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.785 20:19:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.785 20:19:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.785 20:19:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:21.785 20:19:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.785 Malloc0 00:05:21.785 20:19:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.785 Malloc1 00:05:21.785 20:19:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.785 20:19:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.044 /dev/nbd0 00:05:22.044 20:19:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.044 20:19:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.044 1+0 records in 00:05:22.044 1+0 records out 00:05:22.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270273 s, 15.2 MB/s 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.044 20:19:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:22.044 20:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.044 20:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.044 20:19:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.303 /dev/nbd1 00:05:22.303 20:19:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.303 20:19:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.303 1+0 records in 00:05:22.303 1+0 records out 00:05:22.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252978 s, 16.2 MB/s 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.303 20:19:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:22.303 20:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.303 20:19:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.303 20:19:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.303 20:19:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.303 20:19:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:22.303 20:19:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.303 { 00:05:22.303 "nbd_device": "/dev/nbd0", 00:05:22.303 "bdev_name": "Malloc0" 00:05:22.303 }, 00:05:22.303 { 00:05:22.303 "nbd_device": "/dev/nbd1", 00:05:22.303 "bdev_name": "Malloc1" 00:05:22.303 } 00:05:22.303 ]' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.640 { 00:05:22.640 "nbd_device": "/dev/nbd0", 00:05:22.640 "bdev_name": "Malloc0" 00:05:22.640 }, 00:05:22.640 { 00:05:22.640 "nbd_device": "/dev/nbd1", 00:05:22.640 "bdev_name": "Malloc1" 00:05:22.640 } 00:05:22.640 ]' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.640 /dev/nbd1' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.640 /dev/nbd1' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.640 256+0 records in 00:05:22.640 256+0 records out 00:05:22.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114701 s, 91.4 MB/s 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.640 256+0 records in 00:05:22.640 256+0 records out 00:05:22.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207251 s, 50.6 MB/s 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.640 256+0 records in 00:05:22.640 256+0 records out 00:05:22.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219414 s, 47.8 MB/s 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.640 20:19:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.937 20:19:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.199 20:19:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.200 20:19:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.200 20:19:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.459 20:19:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.459 [2024-07-15 20:19:15.804120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.718 [2024-07-15 20:19:15.871121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.718 [2024-07-15 20:19:15.871125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.718 [2024-07-15 20:19:15.912150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.718 [2024-07-15 20:19:15.912192] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.247 20:19:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 303724 /var/tmp/spdk-nbd.sock 00:05:26.247 20:19:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 303724 ']' 00:05:26.247 20:19:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.247 20:19:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.247 20:19:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
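The nbd_dd_data_verify passes traced above implement a simple write-then-compare check: 1 MiB of random data is written through each exported /dev/nbdX device and then compared byte-for-byte against the source file. A minimal standalone sketch of the same idea, assuming an illustrative scratch path and device list rather than the ones used by this run:

    tmp_file=/tmp/nbdrandtest                                       # hypothetical scratch file
    nbd_list="/dev/nbd0 /dev/nbd1"
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
    for dev in $nbd_list; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write it through the nbd device
    done
    for dev in $nbd_list; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev"  # non-zero exit means the data read back differs
    done
    rm "$tmp_file"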
00:05:26.247 20:19:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.247 20:19:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.505 20:19:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.505 20:19:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:26.505 20:19:18 event.app_repeat -- event/event.sh@39 -- # killprocess 303724 00:05:26.505 20:19:18 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 303724 ']' 00:05:26.505 20:19:18 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 303724 00:05:26.505 20:19:18 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:26.505 20:19:18 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.505 20:19:18 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 303724 00:05:26.506 20:19:18 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.506 20:19:18 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.506 20:19:18 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 303724' 00:05:26.506 killing process with pid 303724 00:05:26.506 20:19:18 event.app_repeat -- common/autotest_common.sh@967 -- # kill 303724 00:05:26.506 20:19:18 event.app_repeat -- common/autotest_common.sh@972 -- # wait 303724 00:05:26.764 spdk_app_start is called in Round 0. 00:05:26.764 Shutdown signal received, stop current app iteration 00:05:26.764 Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 reinitialization... 00:05:26.764 spdk_app_start is called in Round 1. 00:05:26.764 Shutdown signal received, stop current app iteration 00:05:26.764 Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 reinitialization... 00:05:26.764 spdk_app_start is called in Round 2. 00:05:26.764 Shutdown signal received, stop current app iteration 00:05:26.764 Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 reinitialization... 00:05:26.764 spdk_app_start is called in Round 3. 
00:05:26.764 Shutdown signal received, stop current app iteration 00:05:26.764 20:19:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:26.764 20:19:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:26.764 00:05:26.764 real 0m16.241s 00:05:26.765 user 0m34.433s 00:05:26.765 sys 0m3.100s 00:05:26.765 20:19:19 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.765 20:19:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.765 ************************************ 00:05:26.765 END TEST app_repeat 00:05:26.765 ************************************ 00:05:26.765 20:19:19 event -- common/autotest_common.sh@1142 -- # return 0 00:05:26.765 20:19:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:26.765 20:19:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:26.765 20:19:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.765 20:19:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.765 20:19:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.765 ************************************ 00:05:26.765 START TEST cpu_locks 00:05:26.765 ************************************ 00:05:26.765 20:19:19 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:26.765 * Looking for test storage... 00:05:27.023 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:27.023 20:19:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.023 20:19:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.023 20:19:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.023 20:19:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.023 20:19:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.023 20:19:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.023 20:19:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.023 ************************************ 00:05:27.023 START TEST default_locks 00:05:27.023 ************************************ 00:05:27.023 20:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:27.024 20:19:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=306691 00:05:27.024 20:19:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.024 20:19:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 306691 00:05:27.024 20:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 306691 ']' 00:05:27.024 20:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.024 20:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.024 20:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
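The default_locks test that starts here launches a fresh spdk_tgt pinned to core 0 and then blocks in waitforlisten until the target answers on its UNIX-domain RPC socket. A simplified poll loop with the same effect, where the retry count and the use of rpc_get_methods as the readiness probe are assumptions rather than the helper's actual implementation:

    pid=306691                                   # pid reported by this run
    sock=/var/tmp/spdk.sock
    for i in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "target exited before listening"; break; }
        ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break   # socket answers: target is ready
        sleep 0.5
    done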
00:05:27.024 20:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.024 20:19:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.024 [2024-07-15 20:19:19.215261] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:27.024 [2024-07-15 20:19:19.215321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306691 ] 00:05:27.024 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.024 [2024-07-15 20:19:19.284509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.024 [2024-07-15 20:19:19.358358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.959 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.959 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:27.959 20:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 306691 00:05:27.959 20:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 306691 00:05:27.959 20:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.526 lslocks: write error 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 306691 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 306691 ']' 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 306691 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 306691 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 306691' 00:05:28.526 killing process with pid 306691 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 306691 00:05:28.526 20:19:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 306691 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 306691 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 306691 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 306691 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 306691 ']' 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.785 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (306691) - No such process 00:05:28.785 ERROR: process (pid: 306691) is no longer running 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.785 00:05:28.785 real 0m1.844s 00:05:28.785 user 0m1.941s 00:05:28.785 sys 0m0.707s 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.785 20:19:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.785 ************************************ 00:05:28.785 END TEST default_locks 00:05:28.785 ************************************ 00:05:28.785 20:19:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:28.785 20:19:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:28.785 20:19:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.785 20:19:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.785 20:19:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.785 ************************************ 00:05:28.785 START TEST default_locks_via_rpc 00:05:28.785 ************************************ 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=307184 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 307184 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 307184 ']' 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.785 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.786 [2024-07-15 20:19:21.138218] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:28.786 [2024-07-15 20:19:21.138300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307184 ] 00:05:29.044 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.044 [2024-07-15 20:19:21.206516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.044 [2024-07-15 20:19:21.284853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 307184 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 307184 00:05:29.612 20:19:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
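The locks_exist check just traced confirms that the running target holds its per-core lock by listing the advisory locks owned by the pid and grepping for the spdk_cpu_lock prefix. The same check can be run by hand with the lslocks/grep pattern shown in the log:

    pid=307184                                   # pid from this run
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds its CPU core lock file"
    else
        echo "no spdk_cpu_lock entry found for pid $pid"
    fi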
00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 307184 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 307184 ']' 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 307184 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 307184 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 307184' 00:05:30.176 killing process with pid 307184 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 307184 00:05:30.176 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 307184 00:05:30.434 00:05:30.434 real 0m1.597s 00:05:30.434 user 0m1.656s 00:05:30.434 sys 0m0.559s 00:05:30.434 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.434 20:19:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.434 ************************************ 00:05:30.434 END TEST default_locks_via_rpc 00:05:30.434 ************************************ 00:05:30.434 20:19:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:30.434 20:19:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:30.434 20:19:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.434 20:19:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.434 20:19:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.434 ************************************ 00:05:30.434 START TEST non_locking_app_on_locked_coremask 00:05:30.434 ************************************ 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=307482 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 307482 /var/tmp/spdk.sock 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 307482 ']' 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.434 20:19:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.434 [2024-07-15 20:19:22.812949] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:30.434 [2024-07-15 20:19:22.813007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307482 ] 00:05:30.690 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.690 [2024-07-15 20:19:22.881294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.690 [2024-07-15 20:19:22.949712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=307513 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 307513 /var/tmp/spdk2.sock 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 307513 ']' 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.259 20:19:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.259 [2024-07-15 20:19:23.631046] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:31.259 [2024-07-15 20:19:23.631096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307513 ] 00:05:31.517 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.517 [2024-07-15 20:19:23.724066] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
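non_locking_app_on_locked_coremask brings up a second target on the same core mask as the first; it only starts because it is launched with --disable-cpumask-locks, so it never tries to claim core 0 (hence the "CPU core locks deactivated" notice above). Stripped of the test plumbing, the pattern is roughly the following, with the binary path shortened and the pids taken from this run:

    spdk_tgt -m 0x1 &                                                   # pid 307482, claims the lock on core 0
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # pid 307513, skips core locking entirely
    # both instances now share core 0, each answering on its own RPC socket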
00:05:31.517 [2024-07-15 20:19:23.724097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.517 [2024-07-15 20:19:23.868222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.080 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.080 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:32.080 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 307482 00:05:32.080 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 307482 00:05:32.080 20:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.452 lslocks: write error 00:05:33.452 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 307482 00:05:33.452 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 307482 ']' 00:05:33.452 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 307482 00:05:33.452 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:33.452 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.452 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 307482 00:05:33.452 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.452 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.452 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 307482' 00:05:33.452 killing process with pid 307482 00:05:33.453 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 307482 00:05:33.453 20:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 307482 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 307513 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 307513 ']' 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 307513 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 307513 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 307513' 00:05:34.016 killing 
process with pid 307513 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 307513 00:05:34.016 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 307513 00:05:34.272 00:05:34.272 real 0m3.859s 00:05:34.272 user 0m4.068s 00:05:34.272 sys 0m1.260s 00:05:34.272 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.272 20:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.272 ************************************ 00:05:34.272 END TEST non_locking_app_on_locked_coremask 00:05:34.272 ************************************ 00:05:34.529 20:19:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:34.529 20:19:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:34.529 20:19:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.529 20:19:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.529 20:19:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.529 ************************************ 00:05:34.529 START TEST locking_app_on_unlocked_coremask 00:05:34.529 ************************************ 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=308096 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 308096 /var/tmp/spdk.sock 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 308096 ']' 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.529 20:19:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.529 [2024-07-15 20:19:26.749751] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:05:34.529 [2024-07-15 20:19:26.749810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308096 ] 00:05:34.529 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.529 [2024-07-15 20:19:26.817418] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:34.529 [2024-07-15 20:19:26.817452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.529 [2024-07-15 20:19:26.887706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=308330 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 308330 /var/tmp/spdk2.sock 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 308330 ']' 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.460 20:19:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.460 [2024-07-15 20:19:27.589141] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:05:35.460 [2024-07-15 20:19:27.589203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308330 ] 00:05:35.460 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.460 [2024-07-15 20:19:27.683574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.460 [2024-07-15 20:19:27.824321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.390 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.390 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:36.390 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 308330 00:05:36.390 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 308330 00:05:36.390 20:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.324 lslocks: write error 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 308096 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 308096 ']' 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 308096 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 308096 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 308096' 00:05:37.582 killing process with pid 308096 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 308096 00:05:37.582 20:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 308096 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 308330 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 308330 ']' 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 308330 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 308330 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 308330' 00:05:38.148 killing process with pid 308330 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 308330 00:05:38.148 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 308330 00:05:38.406 00:05:38.406 real 0m3.996s 00:05:38.406 user 0m4.248s 00:05:38.406 sys 0m1.336s 00:05:38.406 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.406 20:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.406 ************************************ 00:05:38.406 END TEST locking_app_on_unlocked_coremask 00:05:38.406 ************************************ 00:05:38.406 20:19:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:38.406 20:19:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:38.406 20:19:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.406 20:19:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.406 20:19:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.664 ************************************ 00:05:38.664 START TEST locking_app_on_locked_coremask 00:05:38.664 ************************************ 00:05:38.664 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:38.664 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=308901 00:05:38.664 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 308901 /var/tmp/spdk.sock 00:05:38.664 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 308901 ']' 00:05:38.664 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.664 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.664 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.664 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.664 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.665 20:19:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.665 [2024-07-15 20:19:30.824584] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:05:38.665 [2024-07-15 20:19:30.824641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308901 ] 00:05:38.665 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.665 [2024-07-15 20:19:30.891364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.665 [2024-07-15 20:19:30.969900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=309084 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 309084 /var/tmp/spdk2.sock 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 309084 /var/tmp/spdk2.sock 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 309084 /var/tmp/spdk2.sock 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 309084 ']' 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.615 20:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.615 [2024-07-15 20:19:31.652795] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
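Here locking_app_on_locked_coremask starts a second target on core mask 0x1 without disabling core locks, so its claim on core 0 is expected to fail because pid 308901 already holds the lock; the test wraps the attempt in the NOT helper, which inverts the exit status. A simplified equivalent of that inversion (the one-line NOT below is an approximation, not the helper's real body):

    NOT() { "$@" && return 1 || return 0; }          # succeed only when the wrapped command fails
    NOT waitforlisten 309084 /var/tmp/spdk2.sock \
        && echo "second instance could not claim core 0, as the test expects"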
00:05:39.615 [2024-07-15 20:19:31.652859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309084 ] 00:05:39.615 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.615 [2024-07-15 20:19:31.748865] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 308901 has claimed it. 00:05:39.615 [2024-07-15 20:19:31.748901] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.181 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (309084) - No such process 00:05:40.181 ERROR: process (pid: 309084) is no longer running 00:05:40.181 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.181 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:40.181 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:40.181 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.181 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.181 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.181 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 308901 00:05:40.181 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 308901 00:05:40.181 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.440 lslocks: write error 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 308901 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 308901 ']' 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 308901 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 308901 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 308901' 00:05:40.440 killing process with pid 308901 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 308901 00:05:40.440 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 308901 00:05:40.698 00:05:40.698 real 0m2.143s 00:05:40.698 user 0m2.330s 00:05:40.698 sys 0m0.600s 00:05:40.698 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.698 20:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.698 ************************************ 00:05:40.698 END TEST locking_app_on_locked_coremask 00:05:40.698 ************************************ 00:05:40.698 20:19:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:40.698 20:19:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:40.698 20:19:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.698 20:19:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.698 20:19:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.698 ************************************ 00:05:40.698 START TEST locking_overlapped_coremask 00:05:40.698 ************************************ 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=309307 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 309307 /var/tmp/spdk.sock 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 309307 ']' 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.698 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.698 [2024-07-15 20:19:33.052901] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:05:40.698 [2024-07-15 20:19:33.052978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309307 ] 00:05:40.957 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.957 [2024-07-15 20:19:33.123794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.957 [2024-07-15 20:19:33.204213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.957 [2024-07-15 20:19:33.204308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.957 [2024-07-15 20:19:33.204311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=309476 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 309476 /var/tmp/spdk2.sock 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 309476 /var/tmp/spdk2.sock 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 309476 /var/tmp/spdk2.sock 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 309476 ']' 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.523 20:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.782 [2024-07-15 20:19:33.905464] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
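locking_overlapped_coremask pairs a first target on mask 0x7 with a second on mask 0x1c: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two masks collide exactly on core 2, which is why the second instance's lock claim fails just below. The overlap can be confirmed with plain shell arithmetic (an illustrative snippet, not part of the test):

    for mask in 0x7 0x1c; do
        printf '%-5s -> cores:' "$mask"
        for i in 0 1 2 3 4; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
        echo
    done
    printf 'shared bits: 0x%x (core 2)\n' $(( 0x7 & 0x1c ))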
00:05:41.782 [2024-07-15 20:19:33.905553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309476 ] 00:05:41.782 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.782 [2024-07-15 20:19:33.998686] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 309307 has claimed it. 00:05:41.782 [2024-07-15 20:19:33.998721] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:42.353 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (309476) - No such process 00:05:42.353 ERROR: process (pid: 309476) is no longer running 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 309307 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 309307 ']' 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 309307 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 309307 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 309307' 00:05:42.353 killing process with pid 309307 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 309307 00:05:42.353 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 309307 00:05:42.612 00:05:42.613 real 0m1.878s 00:05:42.613 user 0m5.261s 00:05:42.613 sys 0m0.464s 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.613 ************************************ 00:05:42.613 END TEST locking_overlapped_coremask 00:05:42.613 ************************************ 00:05:42.613 20:19:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:42.613 20:19:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:42.613 20:19:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.613 20:19:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.613 20:19:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.613 ************************************ 00:05:42.613 START TEST locking_overlapped_coremask_via_rpc 00:05:42.613 ************************************ 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=309768 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 309768 /var/tmp/spdk.sock 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 309768 ']' 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.613 20:19:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.872 [2024-07-15 20:19:35.011853] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:42.872 [2024-07-15 20:19:35.011915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309768 ] 00:05:42.872 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.872 [2024-07-15 20:19:35.078959] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.872 [2024-07-15 20:19:35.078991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.872 [2024-07-15 20:19:35.148351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.872 [2024-07-15 20:19:35.148475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.872 [2024-07-15 20:19:35.148477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=309788 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 309788 /var/tmp/spdk2.sock 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 309788 ']' 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.807 20:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.807 [2024-07-15 20:19:35.840109] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:43.807 [2024-07-15 20:19:35.840163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309788 ] 00:05:43.807 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.807 [2024-07-15 20:19:35.933207] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:43.807 [2024-07-15 20:19:35.933239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.807 [2024-07-15 20:19:36.079225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.807 [2024-07-15 20:19:36.082491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.807 [2024-07-15 20:19:36.082492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.372 [2024-07-15 20:19:36.710506] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 309768 has claimed it. 
00:05:44.372 request: 00:05:44.372 { 00:05:44.372 "method": "framework_enable_cpumask_locks", 00:05:44.372 "req_id": 1 00:05:44.372 } 00:05:44.372 Got JSON-RPC error response 00:05:44.372 response: 00:05:44.372 { 00:05:44.372 "code": -32603, 00:05:44.372 "message": "Failed to claim CPU core: 2" 00:05:44.372 } 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 309768 /var/tmp/spdk.sock 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 309768 ']' 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.372 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.630 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.630 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.630 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 309788 /var/tmp/spdk2.sock 00:05:44.630 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 309788 ']' 00:05:44.630 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.630 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.630 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
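The failed framework_enable_cpumask_locks call above can be reproduced against the second target's RPC socket. A minimal sketch, assuming SPDK's standard scripts/rpc.py client, with binary paths shortened from the workspace paths in the log and the socket paths taken from this run; while the first target (pid 309768) still holds its lock on core 2, the second call is expected to return the same -32603 "Failed to claim CPU core: 2" error:

    # first target: cores 0-2, cpumask locks disabled at startup (as launched above)
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # second target: cores 2-4, separate RPC socket, locks also disabled at startup
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # (the real test waits for each RPC socket via waitforlisten before issuing calls)

    # enable locks on the first target: it claims /var/tmp/spdk_cpu_lock_000..002
    ./scripts/rpc.py framework_enable_cpumask_locks
    # the same call on the second target now fails, since core 2 is already claimed
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks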
00:05:44.630 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.630 20:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.888 20:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.888 20:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.888 20:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:44.888 20:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.888 20:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.888 20:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.888 00:05:44.888 real 0m2.100s 00:05:44.888 user 0m0.820s 00:05:44.888 sys 0m0.203s 00:05:44.888 20:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.888 20:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.888 ************************************ 00:05:44.888 END TEST locking_overlapped_coremask_via_rpc 00:05:44.888 ************************************ 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:44.888 20:19:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:44.888 20:19:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 309768 ]] 00:05:44.888 20:19:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 309768 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 309768 ']' 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 309768 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 309768 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 309768' 00:05:44.888 killing process with pid 309768 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 309768 00:05:44.888 20:19:37 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 309768 00:05:45.146 20:19:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 309788 ]] 00:05:45.146 20:19:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 309788 00:05:45.146 20:19:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 309788 ']' 00:05:45.146 20:19:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 309788 00:05:45.146 20:19:37 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
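check_remaining_locks, executed above from cpu_locks.sh@36-38, verifies that exactly the expected per-core lock files remain. A minimal standalone sketch of the same comparison, using the paths from this run; the expected set /var/tmp/spdk_cpu_lock_000..002 corresponds to cores 0-2 of the first target's 0x7 mask:

    # expand whatever lock files exist and compare against the expected set
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "lock files match: ${locks[*]}"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
    fi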
00:05:45.146 20:19:37 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.146 20:19:37 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 309788 00:05:45.403 20:19:37 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:45.403 20:19:37 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:45.403 20:19:37 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 309788' 00:05:45.403 killing process with pid 309788 00:05:45.403 20:19:37 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 309788 00:05:45.403 20:19:37 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 309788 00:05:45.726 20:19:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:45.726 20:19:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:45.726 20:19:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 309768 ]] 00:05:45.726 20:19:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 309768 00:05:45.726 20:19:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 309768 ']' 00:05:45.726 20:19:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 309768 00:05:45.726 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (309768) - No such process 00:05:45.726 20:19:37 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 309768 is not found' 00:05:45.726 Process with pid 309768 is not found 00:05:45.726 20:19:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 309788 ]] 00:05:45.726 20:19:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 309788 00:05:45.726 20:19:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 309788 ']' 00:05:45.726 20:19:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 309788 00:05:45.726 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (309788) - No such process 00:05:45.726 20:19:37 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 309788 is not found' 00:05:45.726 Process with pid 309788 is not found 00:05:45.726 20:19:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:45.726 00:05:45.726 real 0m18.795s 00:05:45.726 user 0m31.020s 00:05:45.726 sys 0m6.130s 00:05:45.726 20:19:37 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.726 20:19:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.726 ************************************ 00:05:45.726 END TEST cpu_locks 00:05:45.726 ************************************ 00:05:45.726 20:19:37 event -- common/autotest_common.sh@1142 -- # return 0 00:05:45.726 00:05:45.726 real 0m43.672s 00:05:45.726 user 1m20.352s 00:05:45.726 sys 0m10.369s 00:05:45.726 20:19:37 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.726 20:19:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.726 ************************************ 00:05:45.726 END TEST event 00:05:45.726 ************************************ 00:05:45.726 20:19:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.726 20:19:37 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:45.726 20:19:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.726 20:19:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.726 20:19:37 -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.726 ************************************ 00:05:45.726 START TEST thread 00:05:45.726 ************************************ 00:05:45.726 20:19:37 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:45.726 * Looking for test storage... 00:05:45.726 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:45.726 20:19:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:45.726 20:19:38 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:45.726 20:19:38 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.726 20:19:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.984 ************************************ 00:05:45.984 START TEST thread_poller_perf 00:05:45.984 ************************************ 00:05:45.984 20:19:38 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:45.984 [2024-07-15 20:19:38.131471] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:45.984 [2024-07-15 20:19:38.131580] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310409 ] 00:05:45.984 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.984 [2024-07-15 20:19:38.203123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.984 [2024-07-15 20:19:38.277718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.984 Running 1000 pollers for 1 seconds with 1 microseconds period. 
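The first poller_perf run above is started with -b 1000 -l 1 -t 1; judging from the banner it prints ("Running 1000 pollers for 1 seconds with 1 microseconds period"), -b is the number of pollers to register, -l the poller period in microseconds and -t the run time in seconds. A minimal sketch of invoking the binary directly, with the path shortened from the workspace path in the log:

    # register 1000 pollers firing every 1 us and measure for 1 second
    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1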
00:05:47.362 ====================================== 00:05:47.362 busy:2504426388 (cyc) 00:05:47.362 total_run_count: 857000 00:05:47.362 tsc_hz: 2500000000 (cyc) 00:05:47.362 ====================================== 00:05:47.362 poller_cost: 2922 (cyc), 1168 (nsec) 00:05:47.362 00:05:47.362 real 0m1.234s 00:05:47.362 user 0m1.146s 00:05:47.362 sys 0m0.084s 00:05:47.362 20:19:39 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.362 20:19:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.362 ************************************ 00:05:47.362 END TEST thread_poller_perf 00:05:47.362 ************************************ 00:05:47.362 20:19:39 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:47.362 20:19:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.362 20:19:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:47.362 20:19:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.362 20:19:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.362 ************************************ 00:05:47.362 START TEST thread_poller_perf 00:05:47.362 ************************************ 00:05:47.362 20:19:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.362 [2024-07-15 20:19:39.450407] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:47.362 [2024-07-15 20:19:39.450496] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310668 ] 00:05:47.362 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.362 [2024-07-15 20:19:39.521598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.362 [2024-07-15 20:19:39.593170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.362 Running 1000 pollers for 1 seconds with 0 microseconds period. 
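The poller_cost reported above for the 1 us-period run follows directly from the printed counters: 2504426388 busy cycles over 857000 poller executions is about 2922 cycles per call, and at the reported tsc_hz of 2.5 GHz that is about 1168 ns. The 0 us run that follows reports 178 cyc / 71 ns by the same calculation. A small sketch of the arithmetic (integer division, as in the report):

    busy=2504426388        # busy cycles from the report
    runs=857000            # total_run_count
    tsc_hz=2500000000      # cycles per second (2.5 GHz)
    cyc=$(( busy / runs ))                     # ~2922 cycles per poller call
    nsec=$(( cyc * 1000000000 / tsc_hz ))      # ~1168 ns at 2.5 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"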
00:05:48.300 ====================================== 00:05:48.300 busy:2501379432 (cyc) 00:05:48.300 total_run_count: 14040000 00:05:48.300 tsc_hz: 2500000000 (cyc) 00:05:48.300 ====================================== 00:05:48.300 poller_cost: 178 (cyc), 71 (nsec) 00:05:48.300 00:05:48.300 real 0m1.229s 00:05:48.300 user 0m1.131s 00:05:48.300 sys 0m0.094s 00:05:48.300 20:19:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.300 20:19:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.300 ************************************ 00:05:48.300 END TEST thread_poller_perf 00:05:48.300 ************************************ 00:05:48.560 20:19:40 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:48.560 20:19:40 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:48.560 20:19:40 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:48.560 20:19:40 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.560 20:19:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.560 20:19:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.560 ************************************ 00:05:48.560 START TEST thread_spdk_lock 00:05:48.560 ************************************ 00:05:48.560 20:19:40 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:48.560 [2024-07-15 20:19:40.756095] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:48.560 [2024-07-15 20:19:40.756154] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310844 ] 00:05:48.560 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.560 [2024-07-15 20:19:40.820206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.560 [2024-07-15 20:19:40.897939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.560 [2024-07-15 20:19:40.897942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.129 [2024-07-15 20:19:41.389511] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:49.129 [2024-07-15 20:19:41.389548] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:49.129 [2024-07-15 20:19:41.389558] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14ce200 00:05:49.129 [2024-07-15 20:19:41.390452] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:49.129 [2024-07-15 20:19:41.390556] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:49.129 [2024-07-15 20:19:41.390574] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:49.129 Starting test contend 00:05:49.129 Worker Delay Wait us Hold us Total us 00:05:49.129 0 3 175092 186291 361383 00:05:49.129 1 5 89812 286043 375856 00:05:49.129 PASS test contend 00:05:49.129 Starting test hold_by_poller 00:05:49.129 PASS test hold_by_poller 00:05:49.129 Starting test hold_by_message 00:05:49.129 PASS test hold_by_message 00:05:49.129 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:49.129 100014 assertions passed 00:05:49.129 0 assertions failed 00:05:49.129 00:05:49.129 real 0m0.704s 00:05:49.129 user 0m1.114s 00:05:49.129 sys 0m0.080s 00:05:49.129 20:19:41 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.129 20:19:41 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:49.129 ************************************ 00:05:49.129 END TEST thread_spdk_lock 00:05:49.129 ************************************ 00:05:49.129 20:19:41 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:49.129 00:05:49.129 real 0m3.511s 00:05:49.129 user 0m3.527s 00:05:49.129 sys 0m0.495s 00:05:49.129 20:19:41 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.129 20:19:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.129 ************************************ 00:05:49.129 END TEST thread 00:05:49.129 ************************************ 00:05:49.388 20:19:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.388 20:19:41 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:49.388 20:19:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.388 20:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.388 20:19:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.388 ************************************ 00:05:49.388 START TEST accel 00:05:49.388 ************************************ 00:05:49.388 20:19:41 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:49.388 * Looking for test storage... 00:05:49.388 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:49.388 20:19:41 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:49.388 20:19:41 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:49.388 20:19:41 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.388 20:19:41 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=311051 00:05:49.388 20:19:41 accel -- accel/accel.sh@63 -- # waitforlisten 311051 00:05:49.388 20:19:41 accel -- common/autotest_common.sh@829 -- # '[' -z 311051 ']' 00:05:49.388 20:19:41 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.388 20:19:41 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.388 20:19:41 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:49.388 20:19:41 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:49.388 20:19:41 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:49.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.388 20:19:41 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.388 20:19:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.388 20:19:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.388 20:19:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.388 20:19:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.388 20:19:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.388 20:19:41 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.388 20:19:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:49.388 20:19:41 accel -- accel/accel.sh@41 -- # jq -r . 00:05:49.388 [2024-07-15 20:19:41.711823] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:49.389 [2024-07-15 20:19:41.711899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311051 ] 00:05:49.389 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.648 [2024-07-15 20:19:41.782604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.648 [2024-07-15 20:19:41.854683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@862 -- # return 0 00:05:50.217 20:19:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:50.217 20:19:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:50.217 20:19:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:50.217 20:19:42 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:50.217 20:19:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:50.217 20:19:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:50.217 20:19:42 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 
20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.217 20:19:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.217 20:19:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.217 20:19:42 accel -- accel/accel.sh@75 -- # killprocess 311051 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@948 -- # '[' -z 311051 ']' 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@952 -- # kill -0 311051 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@953 -- # uname 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.217 20:19:42 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 311051 00:05:50.476 20:19:42 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.476 20:19:42 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.476 20:19:42 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 311051' 00:05:50.476 killing process with pid 311051 00:05:50.476 20:19:42 accel -- common/autotest_common.sh@967 -- # kill 311051 00:05:50.476 20:19:42 accel -- common/autotest_common.sh@972 -- # wait 311051 00:05:50.735 20:19:42 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:50.735 20:19:42 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:50.735 20:19:42 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:50.735 20:19:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.735 20:19:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.735 20:19:42 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:50.735 20:19:42 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:50.735 20:19:42 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:50.735 20:19:42 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.735 20:19:42 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.735 20:19:42 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.735 20:19:42 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.735 20:19:42 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.735 20:19:42 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:50.735 20:19:42 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
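The loop above builds the expected_opcs table from the accel_get_opc_assignments RPC, piping its JSON through the jq filter shown at accel.sh@70. A minimal sketch of the same query, assuming SPDK's scripts/rpc.py client against the default socket; on this run every opcode is assigned to the software module:

    # list "opcode=module" pairs, e.g. "copy=software", "crc32c=software", ...
    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'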
00:05:50.735 20:19:43 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.735 20:19:43 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:50.735 20:19:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.735 20:19:43 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:50.735 20:19:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:50.735 20:19:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.735 20:19:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.735 ************************************ 00:05:50.735 START TEST accel_missing_filename 00:05:50.735 ************************************ 00:05:50.736 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:50.736 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:50.736 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:50.736 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:50.736 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.736 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:50.736 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.736 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:50.736 20:19:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:50.736 20:19:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:50.736 20:19:43 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.736 20:19:43 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.736 20:19:43 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.736 20:19:43 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.736 20:19:43 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.736 20:19:43 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:50.736 20:19:43 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:50.736 [2024-07-15 20:19:43.111736] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:50.736 [2024-07-15 20:19:43.111816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311355 ] 00:05:50.994 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.994 [2024-07-15 20:19:43.183658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.994 [2024-07-15 20:19:43.256217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.994 [2024-07-15 20:19:43.296180] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.994 [2024-07-15 20:19:43.355800] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:51.253 A filename is required. 
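The accel_missing_filename failure above ("A filename is required.") is the expected outcome of running the compress workload without an input file. A minimal sketch of the failing call next to a corrected one, reusing the uncompressed input file that the compress_verify test below passes with -l; binary paths are shortened from the workspace paths in the log:

    # fails: the compress workload needs an uncompressed input file via -l
    ./build/examples/accel_perf -t 1 -w compress
    # with -l supplied the application should get past argument parsing
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib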
00:05:51.253 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:51.253 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.253 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:51.253 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:51.253 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:51.253 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.253 00:05:51.253 real 0m0.336s 00:05:51.253 user 0m0.227s 00:05:51.253 sys 0m0.146s 00:05:51.253 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.253 20:19:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:51.253 ************************************ 00:05:51.253 END TEST accel_missing_filename 00:05:51.253 ************************************ 00:05:51.253 20:19:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.253 20:19:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:51.253 20:19:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:51.253 20:19:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.253 20:19:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.253 ************************************ 00:05:51.253 START TEST accel_compress_verify 00:05:51.253 ************************************ 00:05:51.253 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:51.253 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:51.253 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:51.253 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:51.253 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.254 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:51.254 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.254 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:51.254 20:19:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:51.254 20:19:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:51.254 20:19:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.254 20:19:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.254 20:19:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.254 20:19:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.254 
20:19:43 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.254 20:19:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:51.254 20:19:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:51.254 [2024-07-15 20:19:43.529138] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:51.254 [2024-07-15 20:19:43.529219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311382 ] 00:05:51.254 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.254 [2024-07-15 20:19:43.601581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.513 [2024-07-15 20:19:43.672862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.513 [2024-07-15 20:19:43.712776] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.513 [2024-07-15 20:19:43.773025] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:51.513 00:05:51.513 Compression does not support the verify option, aborting. 00:05:51.513 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:51.513 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.513 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:51.513 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:51.513 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:51.513 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.513 00:05:51.513 real 0m0.337s 00:05:51.513 user 0m0.236s 00:05:51.513 sys 0m0.141s 00:05:51.513 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.513 20:19:43 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:51.513 ************************************ 00:05:51.513 END TEST accel_compress_verify 00:05:51.513 ************************************ 00:05:51.513 20:19:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.513 20:19:43 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:51.513 20:19:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:51.513 20:19:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.513 20:19:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.772 ************************************ 00:05:51.772 START TEST accel_wrong_workload 00:05:51.772 ************************************ 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:51.772 20:19:43 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:51.772 20:19:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:51.772 20:19:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:51.772 20:19:43 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.772 20:19:43 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.772 20:19:43 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.772 20:19:43 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.772 20:19:43 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.772 20:19:43 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:51.772 20:19:43 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:51.772 Unsupported workload type: foobar 00:05:51.772 [2024-07-15 20:19:43.943997] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:51.772 accel_perf options: 00:05:51.772 [-h help message] 00:05:51.772 [-q queue depth per core] 00:05:51.772 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:51.772 [-T number of threads per core 00:05:51.772 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:51.772 [-t time in seconds] 00:05:51.772 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:51.772 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:51.772 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:51.772 [-l for compress/decompress workloads, name of uncompressed input file 00:05:51.772 [-S for crc32c workload, use this seed value (default 0) 00:05:51.772 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:51.772 [-f for fill workload, use this BYTE value (default 255) 00:05:51.772 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:51.772 [-y verify result if this switch is on] 00:05:51.772 [-a tasks to allocate per core (default: same value as -q)] 00:05:51.772 Can be used to spread operations across a wider range of memory. 
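The accel_perf usage text above lists the supported workload types; "foobar" is deliberately not one of them, which is exactly what accel_wrong_workload checks. A minimal sketch of a valid invocation built only from options listed in that help output, the same crc32c form the accel_crc32c test below uses; path shortened from the workspace path in the log:

    # crc32c workload for 1 second, seed value 32 (-S), verify results (-y)
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y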
00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.772 00:05:51.772 real 0m0.028s 00:05:51.772 user 0m0.012s 00:05:51.772 sys 0m0.016s 00:05:51.772 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.773 20:19:43 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:51.773 ************************************ 00:05:51.773 END TEST accel_wrong_workload 00:05:51.773 ************************************ 00:05:51.773 Error: writing output failed: Broken pipe 00:05:51.773 20:19:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.773 20:19:43 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:51.773 20:19:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:51.773 20:19:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.773 20:19:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.773 ************************************ 00:05:51.773 START TEST accel_negative_buffers 00:05:51.773 ************************************ 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:51.773 20:19:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:51.773 20:19:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:51.773 20:19:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.773 20:19:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.773 20:19:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.773 20:19:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.773 20:19:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.773 20:19:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:51.773 20:19:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:51.773 -x option must be non-negative. 
00:05:51.773 [2024-07-15 20:19:44.042077] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:51.773 accel_perf options: 00:05:51.773 [-h help message] 00:05:51.773 [-q queue depth per core] 00:05:51.773 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:51.773 [-T number of threads per core 00:05:51.773 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:51.773 [-t time in seconds] 00:05:51.773 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:51.773 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:51.773 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:51.773 [-l for compress/decompress workloads, name of uncompressed input file 00:05:51.773 [-S for crc32c workload, use this seed value (default 0) 00:05:51.773 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:51.773 [-f for fill workload, use this BYTE value (default 255) 00:05:51.773 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:51.773 [-y verify result if this switch is on] 00:05:51.773 [-a tasks to allocate per core (default: same value as -q)] 00:05:51.773 Can be used to spread operations across a wider range of memory. 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.773 00:05:51.773 real 0m0.029s 00:05:51.773 user 0m0.013s 00:05:51.773 sys 0m0.017s 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.773 20:19:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:51.773 ************************************ 00:05:51.773 END TEST accel_negative_buffers 00:05:51.773 ************************************ 00:05:51.773 Error: writing output failed: Broken pipe 00:05:51.773 20:19:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.773 20:19:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:51.773 20:19:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:51.773 20:19:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.773 20:19:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.773 ************************************ 00:05:51.773 START TEST accel_crc32c 00:05:51.773 ************************************ 00:05:51.773 20:19:44 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:51.773 20:19:44 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:51.773 [2024-07-15 20:19:44.143648] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:51.773 [2024-07-15 20:19:44.143724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311696 ] 00:05:52.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.033 [2024-07-15 20:19:44.212131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.033 [2024-07-15 20:19:44.283925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.033 20:19:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:53.413 20:19:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.413 00:05:53.413 real 0m1.333s 00:05:53.413 user 0m1.225s 00:05:53.413 sys 0m0.122s 00:05:53.413 20:19:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.413 20:19:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:53.413 ************************************ 00:05:53.413 END TEST accel_crc32c 00:05:53.413 ************************************ 00:05:53.413 20:19:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.413 20:19:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:53.413 20:19:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:53.413 20:19:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.413 20:19:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.413 ************************************ 00:05:53.413 START TEST accel_crc32c_C2 00:05:53.413 ************************************ 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:53.413 20:19:45 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:53.413 [2024-07-15 20:19:45.565723] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:53.413 [2024-07-15 20:19:45.565811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311934 ] 00:05:53.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.413 [2024-07-15 20:19:45.635795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.413 [2024-07-15 20:19:45.707586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.413 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:53.414 20:19:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.792 00:05:54.792 real 0m1.339s 00:05:54.792 user 0m1.215s 00:05:54.792 sys 0m0.138s 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.792 20:19:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:54.792 ************************************ 00:05:54.792 END TEST accel_crc32c_C2 00:05:54.792 ************************************ 00:05:54.792 20:19:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.792 20:19:46 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:54.792 20:19:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:54.792 20:19:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.792 20:19:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.792 ************************************ 00:05:54.792 START TEST accel_copy 00:05:54.792 ************************************ 00:05:54.792 20:19:46 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
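# A minimal sketch, assuming the accel_perf example binary logged above: the crc32c cases that just
# finished and the copy case starting here differ only in their workload flags (-w crc32c with a
# -S 32 seed, plus -C 2 for the io-vector variant, versus plain -w copy), all verified with -y over
# a 1-second run. The -c /dev/fd/62 argument seen in each trace appears to be a JSON accel config
# fed in by the accel.sh wrapper and is omitted from these standalone equivalents:
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy -y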
00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:54.792 20:19:46 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:54.792 [2024-07-15 20:19:46.989042] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:54.792 [2024-07-15 20:19:46.989122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312159 ] 00:05:54.792 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.792 [2024-07-15 20:19:47.059690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.792 [2024-07-15 20:19:47.131228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.792 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.051 20:19:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.987 
20:19:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:55.987 20:19:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.987 00:05:55.987 real 0m1.338s 00:05:55.987 user 0m1.213s 00:05:55.987 sys 0m0.139s 00:05:55.987 20:19:48 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.987 20:19:48 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:55.987 ************************************ 00:05:55.987 END TEST accel_copy 00:05:55.987 ************************************ 00:05:55.987 20:19:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.987 20:19:48 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:55.987 20:19:48 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:55.987 20:19:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.987 20:19:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.247 ************************************ 00:05:56.247 START TEST accel_fill 00:05:56.247 ************************************ 00:05:56.247 20:19:48 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:56.247 [2024-07-15 20:19:48.412864] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:56.247 [2024-07-15 20:19:48.412947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312390 ] 00:05:56.247 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.247 [2024-07-15 20:19:48.485021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.247 [2024-07-15 20:19:48.557796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
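# A minimal sketch, assuming the same accel_perf binary: in the accel_fill case traced here, -f 128
# sets the fill byte (0x80 in the val trace above), -q 64 the queue depth per core, -a 64 the tasks
# allocated per core, and -y verifies the result:
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y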
00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:56.247 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.248 20:19:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.626 20:19:49 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:57.626 20:19:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.626 00:05:57.626 real 0m1.340s 00:05:57.626 user 0m1.223s 00:05:57.626 sys 0m0.131s 00:05:57.627 20:19:49 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.627 20:19:49 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:57.627 ************************************ 00:05:57.627 END TEST accel_fill 00:05:57.627 ************************************ 00:05:57.627 20:19:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.627 20:19:49 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:57.627 20:19:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:57.627 20:19:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.627 20:19:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.627 ************************************ 00:05:57.627 START TEST accel_copy_crc32c 00:05:57.627 ************************************ 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:57.627 20:19:49 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:57.627 [2024-07-15 20:19:49.838579] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:57.627 [2024-07-15 20:19:49.838678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312613 ] 00:05:57.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.627 [2024-07-15 20:19:49.910168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.627 [2024-07-15 20:19:49.984281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:57.887 
20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:57.887 20:19:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.826 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.826 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.826 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.826 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.827 00:05:58.827 real 0m1.342s 00:05:58.827 user 0m1.223s 00:05:58.827 sys 0m0.133s 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.827 20:19:51 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:58.827 ************************************ 00:05:58.827 END TEST accel_copy_crc32c 00:05:58.827 ************************************ 00:05:58.827 20:19:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.827 20:19:51 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:58.827 20:19:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:58.827 20:19:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.827 20:19:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.087 ************************************ 00:05:59.087 START TEST accel_copy_crc32c_C2 00:05:59.087 ************************************ 00:05:59.087 20:19:51 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:59.087 [2024-07-15 20:19:51.258116] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:59.087 [2024-07-15 20:19:51.258206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312874 ] 00:05:59.087 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.087 [2024-07-15 20:19:51.328910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.087 [2024-07-15 20:19:51.401299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
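# A minimal sketch, assuming the same accel_perf binary: the accel_copy_crc32c_C2 case starting here
# adds -C 2, which per the -C help text captured earlier sets the io vector size to 2 for the
# copy_crc32c workload (the val trace below configures both a 4096-byte and an 8192-byte buffer):
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2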
00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.087 20:19:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.466 00:06:00.466 real 0m1.339s 00:06:00.466 user 0m1.222s 00:06:00.466 sys 0m0.130s 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.466 20:19:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:00.466 ************************************ 00:06:00.466 END TEST accel_copy_crc32c_C2 00:06:00.466 ************************************ 00:06:00.466 20:19:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.466 20:19:52 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:00.466 20:19:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:00.466 20:19:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.466 20:19:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.466 ************************************ 00:06:00.466 START TEST accel_dualcast 00:06:00.466 ************************************ 00:06:00.466 20:19:52 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:00.466 20:19:52 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:00.466 [2024-07-15 20:19:52.681093] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
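(Illustrative sketch, not part of the captured output: every test in this block drives the same accel_perf example binary, only varying the -w workload and its options. A minimal standalone run, assuming a local SPDK checkout at ./spdk and omitting the JSON config that the harness streams in over /dev/fd/62 via build_accel_config, could look like:)
  # sketch only -- the ./spdk path is an assumption; the harness additionally
  # passes -c /dev/fd/62 with a generated accel JSON config, omitted here
  ./spdk/build/examples/accel_perf -t 1 -w dualcast -y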
00:06:00.466 [2024-07-15 20:19:52.681177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid313155 ] 00:06:00.466 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.466 [2024-07-15 20:19:52.751920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.466 [2024-07-15 20:19:52.823321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.725 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:00.726 20:19:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.667 20:19:53 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:01.667 20:19:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.667 00:06:01.667 real 0m1.338s 00:06:01.667 user 0m1.219s 00:06:01.667 sys 0m0.132s 00:06:01.667 20:19:53 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.667 20:19:53 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:01.667 ************************************ 00:06:01.667 END TEST accel_dualcast 00:06:01.667 ************************************ 00:06:01.667 20:19:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.667 20:19:54 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:01.667 20:19:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.667 20:19:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.667 20:19:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.925 ************************************ 00:06:01.925 START TEST accel_compare 00:06:01.925 ************************************ 00:06:01.925 20:19:54 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:01.925 [2024-07-15 20:19:54.104417] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
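(Hedged aside: each test above and below closes with a time-style summary, e.g. real 0m1.338s / user 0m1.219s / sys 0m0.132s for accel_dualcast. One way to pull the wall-clock figures out of a saved copy of this console output, assuming it was written to a local build.log, is:)
  # assumption: this console output saved locally as build.log
  grep -oE 'real[[:space:]]+[0-9]+m[0-9.]+s' build.log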
00:06:01.925 [2024-07-15 20:19:54.104511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid313442 ] 00:06:01.925 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.925 [2024-07-15 20:19:54.176245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.925 [2024-07-15 20:19:54.249612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.925 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.926 20:19:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.299 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.299 
20:19:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:03.300 20:19:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.300 00:06:03.300 real 0m1.342s 00:06:03.300 user 0m1.227s 00:06:03.300 sys 0m0.128s 00:06:03.300 20:19:55 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.300 20:19:55 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:03.300 ************************************ 00:06:03.300 END TEST accel_compare 00:06:03.300 ************************************ 00:06:03.300 20:19:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.300 20:19:55 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:03.300 20:19:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:03.300 20:19:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.300 20:19:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.300 ************************************ 00:06:03.300 START TEST accel_xor 00:06:03.300 ************************************ 00:06:03.300 20:19:55 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:03.300 20:19:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:03.300 [2024-07-15 20:19:55.526508] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
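(Illustrative sketch: the two accel_xor passes in this stretch of the log differ only in the number of source buffers -- the first run's trace sets val=2 with no -x option, the second is invoked with -x 3. Reproduced standalone, with the ./spdk path again an assumption:)
  ./spdk/build/examples/accel_perf -t 1 -w xor -y        # two source buffers, the default seen in this trace
  ./spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # three source buffers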
00:06:03.300 [2024-07-15 20:19:55.526592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid313726 ] 00:06:03.300 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.300 [2024-07-15 20:19:55.597590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.300 [2024-07-15 20:19:55.668894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.558 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:03.559 20:19:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.494 20:19:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.494 20:19:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.494 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.494 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.494 20:19:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.494 20:19:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.494 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.494 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:04.495 20:19:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.495 00:06:04.495 real 0m1.339s 00:06:04.495 user 0m1.218s 00:06:04.495 sys 0m0.136s 00:06:04.495 20:19:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.495 20:19:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:04.495 ************************************ 00:06:04.495 END TEST accel_xor 00:06:04.495 ************************************ 00:06:04.754 20:19:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.754 20:19:56 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:04.754 20:19:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:04.754 20:19:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.754 20:19:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.754 ************************************ 00:06:04.754 START TEST accel_xor 00:06:04.754 ************************************ 00:06:04.754 20:19:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:04.754 20:19:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:04.754 [2024-07-15 20:19:56.947827] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:04.754 [2024-07-15 20:19:56.947911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314005 ] 00:06:04.754 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.754 [2024-07-15 20:19:57.017835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.754 [2024-07-15 20:19:57.094995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.754 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:05.012 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.013 20:19:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:05.950 20:19:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.950 00:06:05.950 real 0m1.343s 00:06:05.950 user 0m1.224s 00:06:05.950 sys 0m0.133s 00:06:05.950 20:19:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.950 20:19:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:05.950 ************************************ 00:06:05.950 END TEST accel_xor 00:06:05.950 ************************************ 00:06:05.950 20:19:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.950 20:19:58 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:05.950 20:19:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:05.950 20:19:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.950 20:19:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.210 ************************************ 00:06:06.210 START TEST accel_dif_verify 00:06:06.210 ************************************ 00:06:06.210 20:19:58 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:06.210 [2024-07-15 20:19:58.377251] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
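(Illustrative sketch: the remaining tests in this stretch exercise the DIF workloads through the same wrapper. The bare invocations, with the path assumed and the piped-in JSON config omitted as before, would be:)
  # any extra block/metadata sizing arguments are not visible in this log
  ./spdk/build/examples/accel_perf -t 1 -w dif_verify
  ./spdk/build/examples/accel_perf -t 1 -w dif_generate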
00:06:06.210 [2024-07-15 20:19:58.377340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314290 ] 00:06:06.210 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.210 [2024-07-15 20:19:58.449640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.210 [2024-07-15 20:19:58.520007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.210 20:19:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.588 20:19:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:07.588 20:19:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.588 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.588 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.588 20:19:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.588 20:19:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.588 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.588 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:07.589 20:19:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.589 00:06:07.589 real 0m1.340s 00:06:07.589 user 0m1.222s 00:06:07.589 sys 0m0.133s 00:06:07.589 20:19:59 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.589 20:19:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:07.589 ************************************ 00:06:07.589 END TEST accel_dif_verify 00:06:07.589 ************************************ 00:06:07.589 20:19:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.589 20:19:59 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:07.589 20:19:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:07.589 20:19:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.589 20:19:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.589 ************************************ 00:06:07.589 START TEST accel_dif_generate 00:06:07.589 ************************************ 00:06:07.589 20:19:59 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.589 
20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:07.589 20:19:59 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:07.589 [2024-07-15 20:19:59.798294] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:07.589 [2024-07-15 20:19:59.798376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314575 ] 00:06:07.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.589 [2024-07-15 20:19:59.868914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.589 [2024-07-15 20:19:59.939226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:07.849 20:19:59 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:07.849 20:19:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.786 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.787 20:20:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:08.787 20:20:01 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.787 00:06:08.787 real 0m1.337s 00:06:08.787 user 0m1.219s 00:06:08.787 sys 0m0.133s 00:06:08.787 20:20:01 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.787 20:20:01 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:08.787 ************************************ 00:06:08.787 END TEST accel_dif_generate 00:06:08.787 ************************************ 00:06:08.787 20:20:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.787 20:20:01 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:08.787 20:20:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:08.787 20:20:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.787 20:20:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.046 ************************************ 00:06:09.046 START TEST accel_dif_generate_copy 00:06:09.046 ************************************ 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:09.046 [2024-07-15 20:20:01.216794] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
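For context on the wall of "IFS=:" / "read -r var val" / "case \"$var\"" entries that dominates this trace: accel_test parses the key:value summary printed by accel_perf through a small read loop and records the opcode and module that actually ran. A minimal sketch of that shape, illustrative only and not the verbatim accel.sh source (the key names and the sample input are assumptions):

    while IFS=: read -r var val; do          # split each "Key: value" line on ':'
      case "$var" in
        *opcode*) accel_opc=${val# } ;;      # e.g. dif_generate_copy
        *module*) accel_module=${val# } ;;   # e.g. software
        *)        : ;;                       # sizes, queue depth, run time, ...
      esac
    done < <(printf '%s\n' 'opcode: dif_generate_copy' 'module: software')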
00:06:09.046 [2024-07-15 20:20:01.216877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314866 ] 00:06:09.046 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.046 [2024-07-15 20:20:01.286187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.046 [2024-07-15 20:20:01.355605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
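The invocation behind this block is visible earlier in the trace. A hypothetical standalone re-run with the same binary path, assuming the workspace layout shown in this log (the harness additionally feeds a generated JSON accel config on /dev/fd/62, omitted here since the trace shows an empty config and the software module being selected):

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # 1-second dif_generate_copy run on one core, software accel module
    "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy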
00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.046 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.047 20:20:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.550 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.551 00:06:10.551 real 0m1.336s 00:06:10.551 user 0m1.221s 00:06:10.551 sys 0m0.130s 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.551 20:20:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:10.551 ************************************ 00:06:10.551 END TEST accel_dif_generate_copy 00:06:10.551 ************************************ 00:06:10.551 20:20:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.551 20:20:02 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:10.551 20:20:02 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:10.551 20:20:02 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:10.551 20:20:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.551 20:20:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.551 ************************************ 00:06:10.551 START TEST accel_comp 00:06:10.551 ************************************ 00:06:10.551 20:20:02 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:10.551 20:20:02 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:10.551 [2024-07-15 20:20:02.632888] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:10.551 [2024-07-15 20:20:02.632971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315150 ] 00:06:10.551 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.551 [2024-07-15 20:20:02.702995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.551 [2024-07-15 20:20:02.772718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.551 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.552 20:20:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:11.930 20:20:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.930 00:06:11.930 real 0m1.339s 00:06:11.930 user 0m1.216s 00:06:11.930 sys 0m0.138s 00:06:11.930 20:20:03 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.930 20:20:03 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:11.930 ************************************ 00:06:11.930 END TEST accel_comp 00:06:11.930 ************************************ 00:06:11.930 20:20:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.930 20:20:03 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:11.930 20:20:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:11.930 20:20:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
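The compression-oriented cases differ from the DIF ones only in their accel_perf arguments: they point -l at the bib sample file in the repo and, for decompress, add -y to verify the output. Taken from the run_test lines in this trace (a sketch; the harness wraps these in accel_test rather than calling the binary directly):

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    BIB=$SPDK/test/accel/bib
    "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$BIB"      # accel_comp
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y   # accel_decomp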
00:06:11.930 20:20:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.930 ************************************ 00:06:11.930 START TEST accel_decomp 00:06:11.930 ************************************ 00:06:11.930 20:20:04 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:11.930 20:20:04 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:11.930 20:20:04 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:11.931 [2024-07-15 20:20:04.048186] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
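The build_accel_config lines above (accel_json_cfg=(), the "-gt 0" checks, local IFS=',', jq -r .) assemble the JSON that accel_perf reads from /dev/fd/62. A rough sketch of that mechanism, under the assumption that the configured entries are simply joined with commas into one JSON document (the exact wrapper produced by accel.sh is not shown in this log):

    accel_json_cfg=()      # empty here, which is why every "-gt 0" test above is false
    config() {
      local IFS=,          # join any configured accel methods with commas
      printf '[%s]\n' "${accel_json_cfg[*]}" | jq -r .
    }
    # accel_perf then receives the document on a file descriptor, e.g.:
    #   accel_perf -c <(config) -t 1 -w decompress -l test/accel/bib -y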
00:06:11.931 [2024-07-15 20:20:04.048245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315392 ] 00:06:11.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.931 [2024-07-15 20:20:04.111375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.931 [2024-07-15 20:20:04.184503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:11.931 20:20:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:13.311 20:20:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.311 00:06:13.311 real 0m1.326s 00:06:13.311 user 0m1.221s 00:06:13.311 sys 0m0.121s 00:06:13.311 20:20:05 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.311 20:20:05 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:13.311 ************************************ 00:06:13.311 END TEST accel_decomp 00:06:13.311 ************************************ 00:06:13.311 20:20:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.311 20:20:05 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:13.311 20:20:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:13.311 20:20:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.311 20:20:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.311 ************************************ 00:06:13.311 START TEST accel_decomp_full 00:06:13.311 ************************************ 00:06:13.311 20:20:05 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:13.311 20:20:05 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:13.311 [2024-07-15 20:20:05.469986] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:13.311 [2024-07-15 20:20:05.470066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315608 ] 00:06:13.311 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.311 [2024-07-15 20:20:05.542082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.311 [2024-07-15 20:20:05.614391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.311 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- 
# case "$var" in 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.312 20:20:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.690 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.691 20:20:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.691 00:06:14.691 real 0m1.349s 00:06:14.691 user 0m1.230s 00:06:14.691 sys 0m0.134s 00:06:14.691 20:20:06 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.691 20:20:06 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:14.691 ************************************ 00:06:14.691 END TEST accel_decomp_full 00:06:14.691 ************************************ 00:06:14.691 20:20:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.691 20:20:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:14.691 20:20:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 
']' 00:06:14.691 20:20:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.691 20:20:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.691 ************************************ 00:06:14.691 START TEST accel_decomp_mcore 00:06:14.691 ************************************ 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:14.691 20:20:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:14.691 [2024-07-15 20:20:06.901031] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:14.691 [2024-07-15 20:20:06.901120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315841 ] 00:06:14.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.691 [2024-07-15 20:20:06.971008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.691 [2024-07-15 20:20:07.046702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.691 [2024-07-15 20:20:07.046798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.691 [2024-07-15 20:20:07.046882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.691 [2024-07-15 20:20:07.046884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.951 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.952 20:20:07 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:06:14.952 20:20:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.890 00:06:15.890 real 0m1.355s 00:06:15.890 user 0m4.558s 00:06:15.890 sys 0m0.143s 00:06:15.890 20:20:08 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.890 20:20:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:15.890 ************************************ 00:06:15.890 END TEST accel_decomp_mcore 00:06:15.890 ************************************ 00:06:16.150 20:20:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.150 20:20:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.150 20:20:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:16.150 20:20:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.150 20:20:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.150 ************************************ 00:06:16.150 START TEST accel_decomp_full_mcore 00:06:16.150 ************************************ 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:16.150 [2024-07-15 20:20:08.341970] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:16.150 [2024-07-15 20:20:08.342049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316072 ] 00:06:16.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.150 [2024-07-15 20:20:08.413046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.150 [2024-07-15 20:20:08.487779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.150 [2024-07-15 20:20:08.487877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.150 [2024-07-15 20:20:08.487974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.150 [2024-07-15 20:20:08.487976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.150 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.410 20:20:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.348 00:06:17.348 real 0m1.365s 00:06:17.348 user 0m4.584s 00:06:17.348 sys 0m0.146s 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.348 20:20:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:17.348 ************************************ 00:06:17.348 END TEST accel_decomp_full_mcore 00:06:17.348 ************************************ 00:06:17.348 20:20:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.348 20:20:09 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:17.348 20:20:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:17.348 20:20:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.348 20:20:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.607 ************************************ 00:06:17.607 START TEST accel_decomp_mthread 00:06:17.607 ************************************ 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:17.607 [2024-07-15 20:20:09.791421] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:17.607 [2024-07-15 20:20:09.791514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316331 ] 00:06:17.607 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.607 [2024-07-15 20:20:09.864695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.607 [2024-07-15 20:20:09.937762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.607 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.608 20:20:09 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.608 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.866 20:20:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.804 20:20:11 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.804 00:06:18.804 real 0m1.349s 00:06:18.804 user 0m1.228s 00:06:18.804 sys 0m0.137s 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.804 20:20:11 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:18.804 ************************************ 00:06:18.804 END TEST accel_decomp_mthread 00:06:18.804 ************************************ 00:06:18.804 20:20:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.804 20:20:11 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:18.804 20:20:11 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:18.804 20:20:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:06:18.804 20:20:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.064 ************************************ 00:06:19.064 START TEST accel_decomp_full_mthread 00:06:19.064 ************************************ 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:19.064 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:19.064 [2024-07-15 20:20:11.222111] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:19.064 [2024-07-15 20:20:11.222188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316614 ] 00:06:19.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.064 [2024-07-15 20:20:11.292382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.064 [2024-07-15 20:20:11.364582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.065 20:20:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.444 00:06:20.444 real 0m1.362s 00:06:20.444 user 0m1.246s 00:06:20.444 sys 0m0.131s 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.444 20:20:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:20.444 ************************************ 00:06:20.444 END TEST accel_decomp_full_mthread 
00:06:20.444 ************************************ 00:06:20.444 20:20:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.444 20:20:12 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:20.444 20:20:12 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:20.444 20:20:12 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.444 20:20:12 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:20.444 20:20:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.444 20:20:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.444 20:20:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.444 20:20:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.444 20:20:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.444 20:20:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.444 20:20:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.444 20:20:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:20.444 20:20:12 accel -- accel/accel.sh@41 -- # jq -r . 00:06:20.444 ************************************ 00:06:20.444 START TEST accel_dif_functional_tests 00:06:20.444 ************************************ 00:06:20.444 20:20:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:20.444 [2024-07-15 20:20:12.672131] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:20.444 [2024-07-15 20:20:12.672213] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316905 ] 00:06:20.444 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.444 [2024-07-15 20:20:12.741326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.444 [2024-07-15 20:20:12.812820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.444 [2024-07-15 20:20:12.812919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.444 [2024-07-15 20:20:12.812920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.704 00:06:20.704 00:06:20.704 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.704 http://cunit.sourceforge.net/ 00:06:20.704 00:06:20.704 00:06:20.704 Suite: accel_dif 00:06:20.704 Test: verify: DIF generated, GUARD check ...passed 00:06:20.704 Test: verify: DIF generated, APPTAG check ...passed 00:06:20.704 Test: verify: DIF generated, REFTAG check ...passed 00:06:20.704 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:20:12.881518] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:20.704 passed 00:06:20.704 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:20:12.881573] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:20.704 passed 00:06:20.704 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:20:12.881600] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:20.704 passed 00:06:20.704 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:20.704 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 20:20:12.881666] dif.c: 
843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:20.704 passed 00:06:20.704 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:20.704 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:20.704 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:20.704 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:20:12.881762] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:20.704 passed 00:06:20.704 Test: verify copy: DIF generated, GUARD check ...passed 00:06:20.704 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:20.704 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:20.704 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:20:12.881874] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:20.704 passed 00:06:20.704 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:20:12.881903] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:20.704 passed 00:06:20.704 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:20:12.881929] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:20.704 passed 00:06:20.704 Test: generate copy: DIF generated, GUARD check ...passed 00:06:20.704 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:20.704 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:20.704 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:20.704 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:20.704 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:20.704 Test: generate copy: iovecs-len validate ...[2024-07-15 20:20:12.882105] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:20.704 passed 00:06:20.704 Test: generate copy: buffer alignment validate ...passed 00:06:20.704 00:06:20.704 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.704 suites 1 1 n/a 0 0 00:06:20.704 tests 26 26 26 0 0 00:06:20.704 asserts 115 115 115 0 n/a 00:06:20.704 00:06:20.704 Elapsed time = 0.002 seconds 00:06:20.704 00:06:20.704 real 0m0.396s 00:06:20.704 user 0m0.559s 00:06:20.704 sys 0m0.152s 00:06:20.704 20:20:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.704 20:20:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:20.704 ************************************ 00:06:20.704 END TEST accel_dif_functional_tests 00:06:20.704 ************************************ 00:06:20.963 20:20:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.963 00:06:20.963 real 0m31.511s 00:06:20.963 user 0m34.540s 00:06:20.963 sys 0m5.105s 00:06:20.963 20:20:13 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.963 20:20:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.963 ************************************ 00:06:20.963 END TEST accel 00:06:20.963 ************************************ 00:06:20.963 20:20:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.963 20:20:13 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:20.963 20:20:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.963 20:20:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.963 20:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:20.963 ************************************ 00:06:20.963 START TEST accel_rpc 00:06:20.963 ************************************ 00:06:20.963 20:20:13 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:20.963 * Looking for test storage... 00:06:20.963 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:20.963 20:20:13 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:20.963 20:20:13 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=317147 00:06:20.963 20:20:13 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 317147 00:06:20.963 20:20:13 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:20.963 20:20:13 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 317147 ']' 00:06:20.963 20:20:13 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.963 20:20:13 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.963 20:20:13 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.963 20:20:13 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.963 20:20:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.963 [2024-07-15 20:20:13.285451] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:20.963 [2024-07-15 20:20:13.285540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317147 ] 00:06:20.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.222 [2024-07-15 20:20:13.354810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.222 [2024-07-15 20:20:13.430595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.790 20:20:14 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.790 20:20:14 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:21.790 20:20:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:21.790 20:20:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:21.790 20:20:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:21.790 20:20:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:21.790 20:20:14 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:21.790 20:20:14 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.790 20:20:14 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.790 20:20:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.790 ************************************ 00:06:21.790 START TEST accel_assign_opcode 00:06:21.790 ************************************ 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.790 [2024-07-15 20:20:14.136675] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.790 [2024-07-15 20:20:14.144684] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.790 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:22.049 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.049 20:20:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:22.049 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.049 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:06:22.049 20:20:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:22.049 20:20:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:22.049 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.049 software 00:06:22.049 00:06:22.049 real 0m0.231s 00:06:22.049 user 0m0.045s 00:06:22.049 sys 0m0.009s 00:06:22.049 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.049 20:20:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:22.049 ************************************ 00:06:22.049 END TEST accel_assign_opcode 00:06:22.049 ************************************ 00:06:22.049 20:20:14 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:22.049 20:20:14 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 317147 00:06:22.049 20:20:14 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 317147 ']' 00:06:22.049 20:20:14 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 317147 00:06:22.049 20:20:14 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:22.049 20:20:14 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.049 20:20:14 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 317147 00:06:22.308 20:20:14 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.308 20:20:14 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.308 20:20:14 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 317147' 00:06:22.308 killing process with pid 317147 00:06:22.308 20:20:14 accel_rpc -- common/autotest_common.sh@967 -- # kill 317147 00:06:22.308 20:20:14 accel_rpc -- common/autotest_common.sh@972 -- # wait 317147 00:06:22.567 00:06:22.567 real 0m1.607s 00:06:22.567 user 0m1.661s 00:06:22.567 sys 0m0.460s 00:06:22.567 20:20:14 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.567 20:20:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.567 ************************************ 00:06:22.567 END TEST accel_rpc 00:06:22.567 ************************************ 00:06:22.567 20:20:14 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.567 20:20:14 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:22.567 20:20:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.567 20:20:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.567 20:20:14 -- common/autotest_common.sh@10 -- # set +x 00:06:22.567 ************************************ 00:06:22.567 START TEST app_cmdline 00:06:22.567 ************************************ 00:06:22.567 20:20:14 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:22.567 * Looking for test storage... 
00:06:22.567 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:22.567 20:20:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:22.567 20:20:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=317557 00:06:22.567 20:20:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 317557 00:06:22.567 20:20:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:22.567 20:20:14 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 317557 ']' 00:06:22.567 20:20:14 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.567 20:20:14 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.567 20:20:14 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.567 20:20:14 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.567 20:20:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.826 [2024-07-15 20:20:14.965995] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:22.826 [2024-07-15 20:20:14.966053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317557 ] 00:06:22.826 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.826 [2024-07-15 20:20:15.033221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.826 [2024-07-15 20:20:15.105180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:23.763 { 00:06:23.763 "version": "SPDK v24.09-pre git sha1 6c0846996", 00:06:23.763 "fields": { 00:06:23.763 "major": 24, 00:06:23.763 "minor": 9, 00:06:23.763 "patch": 0, 00:06:23.763 "suffix": "-pre", 00:06:23.763 "commit": "6c0846996" 00:06:23.763 } 00:06:23.763 } 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:23.763 20:20:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:23.763 20:20:15 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.763 request: 00:06:23.763 { 00:06:23.763 "method": "env_dpdk_get_mem_stats", 00:06:23.763 "req_id": 1 00:06:23.763 } 00:06:23.763 Got JSON-RPC error response 00:06:23.763 response: 00:06:23.763 { 00:06:23.763 "code": -32601, 00:06:23.763 "message": "Method not found" 00:06:23.763 } 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.021 20:20:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 317557 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 317557 ']' 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 317557 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 317557 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 317557' 00:06:24.021 killing process with pid 317557 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@967 -- # kill 317557 00:06:24.021 20:20:16 app_cmdline -- common/autotest_common.sh@972 -- # wait 317557 00:06:24.280 00:06:24.280 real 0m1.676s 00:06:24.280 user 0m1.930s 00:06:24.280 sys 0m0.485s 00:06:24.280 20:20:16 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:24.280 20:20:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.280 ************************************ 00:06:24.280 END TEST app_cmdline 00:06:24.280 ************************************ 00:06:24.280 20:20:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.280 20:20:16 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:24.280 20:20:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.280 20:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.280 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:06:24.280 ************************************ 00:06:24.280 START TEST version 00:06:24.280 ************************************ 00:06:24.280 20:20:16 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:24.540 * Looking for test storage... 00:06:24.540 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:24.540 20:20:16 version -- app/version.sh@17 -- # get_header_version major 00:06:24.540 20:20:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:24.540 20:20:16 version -- app/version.sh@14 -- # cut -f2 00:06:24.541 20:20:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.541 20:20:16 version -- app/version.sh@17 -- # major=24 00:06:24.541 20:20:16 version -- app/version.sh@18 -- # get_header_version minor 00:06:24.541 20:20:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:24.541 20:20:16 version -- app/version.sh@14 -- # cut -f2 00:06:24.541 20:20:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.541 20:20:16 version -- app/version.sh@18 -- # minor=9 00:06:24.541 20:20:16 version -- app/version.sh@19 -- # get_header_version patch 00:06:24.541 20:20:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:24.541 20:20:16 version -- app/version.sh@14 -- # cut -f2 00:06:24.541 20:20:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.541 20:20:16 version -- app/version.sh@19 -- # patch=0 00:06:24.541 20:20:16 version -- app/version.sh@20 -- # get_header_version suffix 00:06:24.541 20:20:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:24.541 20:20:16 version -- app/version.sh@14 -- # cut -f2 00:06:24.541 20:20:16 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.541 20:20:16 version -- app/version.sh@20 -- # suffix=-pre 00:06:24.541 20:20:16 version -- app/version.sh@22 -- # version=24.9 00:06:24.541 20:20:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:24.541 20:20:16 version -- app/version.sh@28 -- # version=24.9rc0 00:06:24.541 20:20:16 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:24.541 20:20:16 version -- app/version.sh@30 -- # python3 -c 
'import spdk; print(spdk.__version__)' 00:06:24.541 20:20:16 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:24.541 20:20:16 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:24.541 00:06:24.541 real 0m0.176s 00:06:24.541 user 0m0.090s 00:06:24.541 sys 0m0.132s 00:06:24.541 20:20:16 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.541 20:20:16 version -- common/autotest_common.sh@10 -- # set +x 00:06:24.541 ************************************ 00:06:24.541 END TEST version 00:06:24.541 ************************************ 00:06:24.541 20:20:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.541 20:20:16 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@198 -- # uname -s 00:06:24.541 20:20:16 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:24.541 20:20:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:24.541 20:20:16 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:24.541 20:20:16 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:24.541 20:20:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.541 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:06:24.541 20:20:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:06:24.541 20:20:16 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:24.541 20:20:16 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:06:24.541 20:20:16 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:06:24.541 20:20:16 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:24.541 20:20:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.541 20:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.541 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:06:24.541 ************************************ 00:06:24.541 START TEST llvm_fuzz 00:06:24.541 ************************************ 00:06:24.541 20:20:16 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:24.799 * Looking for test storage... 
00:06:24.799 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:24.800 20:20:16 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:24.800 20:20:16 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:24.800 20:20:16 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:06:24.800 20:20:16 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:06:24.800 20:20:16 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:06:24.800 20:20:16 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:24.800 20:20:16 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:24.800 20:20:16 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:24.800 20:20:16 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:24.800 20:20:16 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:24.800 20:20:17 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:24.800 20:20:17 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:24.800 20:20:17 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:24.800 20:20:17 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:24.800 20:20:17 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:24.800 20:20:17 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:24.800 20:20:17 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:24.800 20:20:17 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:24.800 20:20:17 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.800 20:20:17 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.800 20:20:17 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:24.800 ************************************ 00:06:24.800 START TEST nvmf_llvm_fuzz 00:06:24.800 ************************************ 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:24.800 * Looking for test storage... 
00:06:24.800 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:24.800 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:24.801 #define SPDK_CONFIG_H 00:06:24.801 #define SPDK_CONFIG_APPS 1 00:06:24.801 #define SPDK_CONFIG_ARCH native 00:06:24.801 #undef SPDK_CONFIG_ASAN 00:06:24.801 #undef SPDK_CONFIG_AVAHI 00:06:24.801 #undef SPDK_CONFIG_CET 00:06:24.801 #define SPDK_CONFIG_COVERAGE 1 00:06:24.801 #define SPDK_CONFIG_CROSS_PREFIX 00:06:24.801 #undef SPDK_CONFIG_CRYPTO 00:06:24.801 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:24.801 #undef SPDK_CONFIG_CUSTOMOCF 00:06:24.801 #undef SPDK_CONFIG_DAOS 00:06:24.801 #define SPDK_CONFIG_DAOS_DIR 00:06:24.801 #define SPDK_CONFIG_DEBUG 1 00:06:24.801 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:24.801 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:24.801 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:24.801 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:24.801 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:24.801 #undef SPDK_CONFIG_DPDK_UADK 00:06:24.801 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:24.801 #define SPDK_CONFIG_EXAMPLES 1 00:06:24.801 #undef SPDK_CONFIG_FC 00:06:24.801 #define SPDK_CONFIG_FC_PATH 00:06:24.801 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:24.801 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:24.801 #undef SPDK_CONFIG_FUSE 00:06:24.801 #define SPDK_CONFIG_FUZZER 1 00:06:24.801 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:24.801 #undef SPDK_CONFIG_GOLANG 00:06:24.801 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:24.801 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:24.801 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:24.801 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:24.801 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:24.801 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:24.801 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:24.801 #define SPDK_CONFIG_IDXD 1 00:06:24.801 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:24.801 #undef SPDK_CONFIG_IPSEC_MB 00:06:24.801 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:24.801 #define SPDK_CONFIG_ISAL 1 00:06:24.801 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:06:24.801 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:24.801 #define SPDK_CONFIG_LIBDIR 00:06:24.801 #undef SPDK_CONFIG_LTO 00:06:24.801 #define SPDK_CONFIG_MAX_LCORES 128 00:06:24.801 #define SPDK_CONFIG_NVME_CUSE 1 00:06:24.801 #undef SPDK_CONFIG_OCF 00:06:24.801 #define SPDK_CONFIG_OCF_PATH 00:06:24.801 #define SPDK_CONFIG_OPENSSL_PATH 00:06:24.801 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:24.801 #define SPDK_CONFIG_PGO_DIR 00:06:24.801 #undef SPDK_CONFIG_PGO_USE 00:06:24.801 #define SPDK_CONFIG_PREFIX /usr/local 00:06:24.801 #undef SPDK_CONFIG_RAID5F 00:06:24.801 #undef SPDK_CONFIG_RBD 00:06:24.801 #define SPDK_CONFIG_RDMA 1 00:06:24.801 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:24.801 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:24.801 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:24.801 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:24.801 #undef SPDK_CONFIG_SHARED 00:06:24.801 #undef SPDK_CONFIG_SMA 00:06:24.801 #define SPDK_CONFIG_TESTS 1 00:06:24.801 #undef SPDK_CONFIG_TSAN 00:06:24.801 #define SPDK_CONFIG_UBLK 1 00:06:24.801 #define SPDK_CONFIG_UBSAN 1 00:06:24.801 #undef SPDK_CONFIG_UNIT_TESTS 00:06:24.801 #undef SPDK_CONFIG_URING 00:06:24.801 #define SPDK_CONFIG_URING_PATH 00:06:24.801 #undef SPDK_CONFIG_URING_ZNS 00:06:24.801 #undef SPDK_CONFIG_USDT 00:06:24.801 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:24.801 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:24.801 #define SPDK_CONFIG_VFIO_USER 1 00:06:24.801 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:24.801 #define SPDK_CONFIG_VHOST 1 00:06:24.801 #define SPDK_CONFIG_VIRTIO 1 00:06:24.801 #undef SPDK_CONFIG_VTUNE 00:06:24.801 #define SPDK_CONFIG_VTUNE_DIR 00:06:24.801 #define SPDK_CONFIG_WERROR 1 00:06:24.801 #define SPDK_CONFIG_WPDK_DIR 00:06:24.801 #undef SPDK_CONFIG_XNVME 00:06:24.801 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:24.801 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:25.063 
20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:25.063 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:25.064 20:20:17 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:06:25.064 20:20:17 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:25.064 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:25.065 20:20:17 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 317993 ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 317993 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.l1gs4e 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.l1gs4e/tests/nvmf /tmp/spdk.l1gs4e 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=954408960 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4330020864 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=53961662464 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742317568 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7780655104 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866448384 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342484992 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348465152 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5980160 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870216704 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=942080 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:25.065 20:20:17 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:25.065 * Looking for test storage... 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=53961662464 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=9995247616 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:25.065 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:25.065 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:25.066 20:20:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:25.066 [2024-07-15 20:20:17.365041] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:25.066 [2024-07-15 20:20:17.365114] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318033 ] 00:06:25.066 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.324 [2024-07-15 20:20:17.626043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.582 [2024-07-15 20:20:17.709305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.582 [2024-07-15 20:20:17.768834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.582 [2024-07-15 20:20:17.785114] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:25.582 INFO: Running with entropic power schedule (0xFF, 100). 00:06:25.582 INFO: Seed: 961975786 00:06:25.582 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:25.582 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:25.582 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:25.582 INFO: A corpus is not provided, starting from an empty corpus 00:06:25.582 #2 INITED exec/s: 0 rss: 63Mb 00:06:25.582 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:25.582 This may also happen if the target rejected all inputs we tried so far 00:06:25.582 [2024-07-15 20:20:17.855446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.582 [2024-07-15 20:20:17.855487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.841 NEW_FUNC[1/697]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:25.841 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:25.841 #8 NEW cov: 11918 ft: 11919 corp: 2/107b lim: 320 exec/s: 0 rss: 70Mb L: 106/106 MS: 1 InsertRepeatedBytes- 00:06:25.841 [2024-07-15 20:20:18.195417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.841 [2024-07-15 20:20:18.195460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.099 #19 NEW cov: 12031 ft: 12542 corp: 3/213b lim: 320 exec/s: 0 rss: 70Mb L: 106/106 MS: 1 CopyPart- 00:06:26.099 [2024-07-15 20:20:18.245558] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.099 [2024-07-15 20:20:18.245589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.099 #20 NEW cov: 12037 ft: 12734 corp: 4/319b lim: 320 exec/s: 0 rss: 70Mb L: 106/106 MS: 1 ShuffleBytes- 00:06:26.099 [2024-07-15 20:20:18.295697] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.099 [2024-07-15 20:20:18.295725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.099 #21 NEW cov: 12122 ft: 13192 corp: 5/425b lim: 320 exec/s: 0 rss: 70Mb L: 106/106 MS: 1 ShuffleBytes- 00:06:26.099 [2024-07-15 20:20:18.335798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b3fb8 00:06:26.099 [2024-07-15 20:20:18.335827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.099 #22 NEW cov: 12122 ft: 13249 corp: 6/539b lim: 320 exec/s: 0 rss: 70Mb L: 114/114 MS: 1 CMP- DE: "\235\206\373\013\270?+\000"- 00:06:26.099 [2024-07-15 20:20:18.386108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.099 [2024-07-15 20:20:18.386136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.099 [2024-07-15 20:20:18.386260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.099 [2024-07-15 20:20:18.386277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.099 
NEW_FUNC[1/1]: 0x138ec70 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2047 00:06:26.099 #26 NEW cov: 12165 ft: 13560 corp: 7/667b lim: 320 exec/s: 0 rss: 70Mb L: 128/128 MS: 4 CopyPart-CrossOver-ChangeBit-InsertRepeatedBytes- 00:06:26.099 [2024-07-15 20:20:18.426205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.099 [2024-07-15 20:20:18.426232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.099 [2024-07-15 20:20:18.426373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.099 [2024-07-15 20:20:18.426390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.099 #27 NEW cov: 12165 ft: 13624 corp: 8/795b lim: 320 exec/s: 0 rss: 71Mb L: 128/128 MS: 1 ChangeByte- 00:06:26.099 [2024-07-15 20:20:18.476493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.099 [2024-07-15 20:20:18.476519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.099 [2024-07-15 20:20:18.476633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.099 [2024-07-15 20:20:18.476650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.358 #28 NEW cov: 12176 ft: 13719 corp: 9/942b lim: 320 exec/s: 0 rss: 71Mb L: 147/147 MS: 1 CrossOver- 00:06:26.358 [2024-07-15 20:20:18.516367] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.359 [2024-07-15 20:20:18.516396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.359 #29 NEW cov: 12176 ft: 13741 corp: 10/1056b lim: 320 exec/s: 0 rss: 71Mb L: 114/147 MS: 1 CMP- DE: "M\365\254F\270?+\000"- 00:06:26.359 [2024-07-15 20:20:18.556555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.359 [2024-07-15 20:20:18.556587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.359 #30 NEW cov: 12176 ft: 13855 corp: 11/1162b lim: 320 exec/s: 0 rss: 71Mb L: 106/147 MS: 1 CopyPart- 00:06:26.359 [2024-07-15 20:20:18.606625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b3fb8 00:06:26.359 [2024-07-15 20:20:18.606652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.359 #31 NEW cov: 12176 ft: 13945 corp: 12/1276b lim: 320 exec/s: 0 rss: 71Mb L: 114/147 MS: 1 ChangeByte- 00:06:26.359 [2024-07-15 20:20:18.656809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x2b3fb8 00:06:26.359 [2024-07-15 20:20:18.656837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.359 #32 NEW cov: 12176 ft: 13996 corp: 13/1360b lim: 320 exec/s: 0 rss: 71Mb L: 84/147 MS: 1 EraseBytes- 00:06:26.359 [2024-07-15 20:20:18.706905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b3fb8 00:06:26.359 [2024-07-15 20:20:18.706933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.359 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:26.359 #33 NEW cov: 12199 ft: 14064 corp: 14/1474b lim: 320 exec/s: 0 rss: 71Mb L: 114/147 MS: 1 ChangeByte- 00:06:26.618 [2024-07-15 20:20:18.747024] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b3fb8 00:06:26.618 [2024-07-15 20:20:18.747052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.618 #34 NEW cov: 12199 ft: 14099 corp: 15/1588b lim: 320 exec/s: 0 rss: 71Mb L: 114/147 MS: 1 ChangeBinInt- 00:06:26.618 [2024-07-15 20:20:18.797147] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:55555555 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5555555555555555 00:06:26.618 [2024-07-15 20:20:18.797176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.618 #37 NEW cov: 12199 ft: 14104 corp: 16/1680b lim: 320 exec/s: 0 rss: 71Mb L: 92/147 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:06:26.618 [2024-07-15 20:20:18.837382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.618 [2024-07-15 20:20:18.837410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.618 #38 NEW cov: 12199 ft: 14166 corp: 17/1802b lim: 320 exec/s: 38 rss: 71Mb L: 122/147 MS: 1 PersAutoDict- DE: "M\365\254F\270?+\000"- 00:06:26.618 [2024-07-15 20:20:18.887427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:55555555 SGL TRANSPORT DATA BLOCK TRANSPORT 0x5555555555555555 00:06:26.618 [2024-07-15 20:20:18.887460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.618 #44 NEW cov: 12199 ft: 14216 corp: 18/1894b lim: 320 exec/s: 44 rss: 71Mb L: 92/147 MS: 1 ChangeByte- 00:06:26.618 [2024-07-15 20:20:18.937565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.618 [2024-07-15 20:20:18.937592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.618 #45 NEW cov: 12199 ft: 14240 corp: 19/2000b lim: 320 exec/s: 45 rss: 71Mb L: 106/147 MS: 1 ShuffleBytes- 00:06:26.618 [2024-07-15 20:20:18.977687] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1ff 00:06:26.618 [2024-07-15 20:20:18.977718] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.878 #46 NEW cov: 12199 ft: 14264 corp: 20/2106b lim: 320 exec/s: 46 rss: 71Mb L: 106/147 MS: 1 CMP- DE: "\377\377\377\001"- 00:06:26.878 [2024-07-15 20:20:19.027922] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b3fb8 00:06:26.878 [2024-07-15 20:20:19.027951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.878 #47 NEW cov: 12199 ft: 14301 corp: 21/2220b lim: 320 exec/s: 47 rss: 71Mb L: 114/147 MS: 1 ChangeByte- 00:06:26.878 [2024-07-15 20:20:19.067559] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.878 [2024-07-15 20:20:19.067587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.878 #48 NEW cov: 12199 ft: 14327 corp: 22/2326b lim: 320 exec/s: 48 rss: 72Mb L: 106/147 MS: 1 ChangeBinInt- 00:06:26.878 [2024-07-15 20:20:19.118102] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1ff 00:06:26.878 [2024-07-15 20:20:19.118131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.878 #49 NEW cov: 12199 ft: 14369 corp: 23/2432b lim: 320 exec/s: 49 rss: 72Mb L: 106/147 MS: 1 ChangeByte- 00:06:26.878 [2024-07-15 20:20:19.168238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b3fb8 00:06:26.878 [2024-07-15 20:20:19.168267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.878 #55 NEW cov: 12199 ft: 14383 corp: 24/2550b lim: 320 exec/s: 55 rss: 72Mb L: 118/147 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:26.878 [2024-07-15 20:20:19.208641] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.878 [2024-07-15 20:20:19.208670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.878 [2024-07-15 20:20:19.208782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.878 [2024-07-15 20:20:19.208799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.878 #56 NEW cov: 12199 ft: 14395 corp: 25/2697b lim: 320 exec/s: 56 rss: 72Mb L: 147/147 MS: 1 ShuffleBytes- 00:06:26.878 [2024-07-15 20:20:19.258773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.878 [2024-07-15 20:20:19.258800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.878 [2024-07-15 20:20:19.258926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.878 [2024-07-15 20:20:19.258942] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.138 #57 NEW cov: 12199 ft: 14481 corp: 26/2846b lim: 320 exec/s: 57 rss: 72Mb L: 149/149 MS: 1 InsertRepeatedBytes- 00:06:27.138 [2024-07-15 20:20:19.298792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.138 [2024-07-15 20:20:19.298822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.138 [2024-07-15 20:20:19.298932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.138 [2024-07-15 20:20:19.298952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.138 #58 NEW cov: 12199 ft: 14490 corp: 27/2990b lim: 320 exec/s: 58 rss: 72Mb L: 144/149 MS: 1 InsertRepeatedBytes- 00:06:27.138 [2024-07-15 20:20:19.348996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.138 [2024-07-15 20:20:19.349025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.138 [2024-07-15 20:20:19.349163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:002d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.138 [2024-07-15 20:20:19.349180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.138 #59 NEW cov: 12199 ft: 14493 corp: 28/3140b lim: 320 exec/s: 59 rss: 72Mb L: 150/150 MS: 1 InsertByte- 00:06:27.138 [2024-07-15 20:20:19.399013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.138 [2024-07-15 20:20:19.399042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.138 #60 NEW cov: 12199 ft: 14515 corp: 29/3246b lim: 320 exec/s: 60 rss: 72Mb L: 106/150 MS: 1 CrossOver- 00:06:27.138 [2024-07-15 20:20:19.439104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b3fb8 00:06:27.138 [2024-07-15 20:20:19.439133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.138 #61 NEW cov: 12199 ft: 14548 corp: 30/3360b lim: 320 exec/s: 61 rss: 72Mb L: 114/150 MS: 1 ShuffleBytes- 00:06:27.138 [2024-07-15 20:20:19.489449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.138 [2024-07-15 20:20:19.489477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.138 [2024-07-15 20:20:19.489597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.138 [2024-07-15 20:20:19.489614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.138 #62 NEW cov: 12199 ft: 14552 corp: 31/3507b lim: 
320 exec/s: 62 rss: 72Mb L: 147/150 MS: 1 ChangeBit- 00:06:27.398 [2024-07-15 20:20:19.529355] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.398 [2024-07-15 20:20:19.529385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.398 #63 NEW cov: 12199 ft: 14559 corp: 32/3613b lim: 320 exec/s: 63 rss: 72Mb L: 106/150 MS: 1 ChangeByte- 00:06:27.398 [2024-07-15 20:20:19.569623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.398 [2024-07-15 20:20:19.569651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.398 [2024-07-15 20:20:19.569794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.398 [2024-07-15 20:20:19.569817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.398 #64 NEW cov: 12199 ft: 14568 corp: 33/3741b lim: 320 exec/s: 64 rss: 72Mb L: 128/150 MS: 1 ChangeBinInt- 00:06:27.398 [2024-07-15 20:20:19.619670] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.398 [2024-07-15 20:20:19.619697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.398 #65 NEW cov: 12199 ft: 14574 corp: 34/3847b lim: 320 exec/s: 65 rss: 72Mb L: 106/150 MS: 1 ChangeBinInt- 00:06:27.399 [2024-07-15 20:20:19.670360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b3fb8 00:06:27.399 [2024-07-15 20:20:19.670387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.399 [2024-07-15 20:20:19.670508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a 00:06:27.399 [2024-07-15 20:20:19.670527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.399 [2024-07-15 20:20:19.670659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9d) qid:0 cid:6 nsid:2b3fb8 cdw10:f8ffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:27.399 [2024-07-15 20:20:19.670677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.399 NEW_FUNC[1/1]: 0x17c0860 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:27.399 #66 NEW cov: 12213 ft: 15114 corp: 35/4075b lim: 320 exec/s: 66 rss: 73Mb L: 228/228 MS: 1 CrossOver- 00:06:27.399 [2024-07-15 20:20:19.709870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b3fb8 00:06:27.399 [2024-07-15 20:20:19.709898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:27.399 #67 NEW cov: 12213 ft: 15124 corp: 36/4189b lim: 320 exec/s: 67 rss: 73Mb L: 114/228 MS: 1 ChangeByte- 00:06:27.399 [2024-07-15 20:20:19.760038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:9d9d9d9d SGL TRANSPORT DATA BLOCK TRANSPORT 0x9d9d9d9d9d9d9d9d 00:06:27.399 [2024-07-15 20:20:19.760066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.399 #72 NEW cov: 12213 ft: 15131 corp: 37/4303b lim: 320 exec/s: 72 rss: 73Mb L: 114/228 MS: 5 PersAutoDict-CopyPart-ChangeBinInt-CopyPart-InsertRepeatedBytes- DE: "M\365\254F\270?+\000"- 00:06:27.659 [2024-07-15 20:20:19.800244] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.659 [2024-07-15 20:20:19.800272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.659 #73 NEW cov: 12213 ft: 15172 corp: 38/4409b lim: 320 exec/s: 36 rss: 73Mb L: 106/228 MS: 1 ShuffleBytes- 00:06:27.659 #73 DONE cov: 12213 ft: 15172 corp: 38/4409b lim: 320 exec/s: 36 rss: 73Mb 00:06:27.659 ###### Recommended dictionary. ###### 00:06:27.659 "\235\206\373\013\270?+\000" # Uses: 0 00:06:27.659 "M\365\254F\270?+\000" # Uses: 2 00:06:27.659 "\377\377\377\001" # Uses: 0 00:06:27.659 "\000\000\000\000" # Uses: 0 00:06:27.659 ###### End of recommended dictionary. ###### 00:06:27.659 Done 73 runs in 2 second(s) 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:27.659 20:20:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:27.659 [2024-07-15 20:20:20.003816] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:27.659 [2024-07-15 20:20:20.003900] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318568 ] 00:06:27.659 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.919 [2024-07-15 20:20:20.264126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.179 [2024-07-15 20:20:20.358604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.179 [2024-07-15 20:20:20.418368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.179 [2024-07-15 20:20:20.434658] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:28.179 INFO: Running with entropic power schedule (0xFF, 100). 00:06:28.179 INFO: Seed: 3611955650 00:06:28.179 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:28.179 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:28.179 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:28.179 INFO: A corpus is not provided, starting from an empty corpus 00:06:28.179 #2 INITED exec/s: 0 rss: 63Mb 00:06:28.179 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:28.179 This may also happen if the target rejected all inputs we tried so far 00:06:28.179 [2024-07-15 20:20:20.484216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.179 [2024-07-15 20:20:20.484246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.179 [2024-07-15 20:20:20.484319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.179 [2024-07-15 20:20:20.484334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.179 [2024-07-15 20:20:20.484388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.179 [2024-07-15 20:20:20.484402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.179 [2024-07-15 20:20:20.484462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.179 [2024-07-15 20:20:20.484476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.438 NEW_FUNC[1/698]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:28.438 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:28.438 #6 NEW cov: 12026 ft: 12011 corp: 2/27b lim: 30 exec/s: 0 rss: 70Mb L: 26/26 MS: 4 ChangeBit-CopyPart-ChangeByte-InsertRepeatedBytes- 00:06:28.438 [2024-07-15 20:20:20.814936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.438 [2024-07-15 20:20:20.814975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.438 [2024-07-15 20:20:20.815037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.439 [2024-07-15 20:20:20.815054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.439 [2024-07-15 20:20:20.815123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.439 [2024-07-15 20:20:20.815139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.699 #9 NEW cov: 12139 ft: 13068 corp: 3/47b lim: 30 exec/s: 0 rss: 70Mb L: 20/26 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:06:28.699 [2024-07-15 20:20:20.854641] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (20) > len (4) 00:06:28.699 [2024-07-15 20:20:20.854958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:20.854984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.699 [2024-07-15 20:20:20.855041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:20.855056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.699 [2024-07-15 20:20:20.855111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:20.855125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.699 #10 NEW cov: 12152 ft: 13371 corp: 4/67b lim: 30 exec/s: 0 rss: 70Mb L: 20/26 MS: 1 ChangeBinInt- 00:06:28.699 [2024-07-15 20:20:20.904776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:20.904801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.699 #11 NEW cov: 12237 ft: 14072 corp: 5/77b lim: 30 exec/s: 0 rss: 70Mb L: 10/26 MS: 1 CrossOver- 00:06:28.699 [2024-07-15 20:20:20.945094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:20.945119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.699 [2024-07-15 20:20:20.945176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:20.945190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.699 #12 NEW cov: 12237 ft: 14449 corp: 6/92b lim: 30 exec/s: 0 rss: 70Mb L: 15/26 MS: 1 EraseBytes- 00:06:28.699 [2024-07-15 20:20:20.985101] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (5120) > len (4) 00:06:28.699 [2024-07-15 20:20:20.985317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:20.985342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.699 [2024-07-15 20:20:20.985401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:20.985415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.699 [2024-07-15 20:20:20.985468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:20.985482] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.699 #13 NEW cov: 12237 ft: 14506 corp: 7/113b lim: 30 exec/s: 0 rss: 70Mb L: 21/26 MS: 1 CrossOver- 00:06:28.699 [2024-07-15 20:20:21.035436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:21.035467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.699 [2024-07-15 20:20:21.035525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:21.035539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.699 [2024-07-15 20:20:21.035596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.699 [2024-07-15 20:20:21.035609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.699 #14 NEW cov: 12237 ft: 14570 corp: 8/132b lim: 30 exec/s: 0 rss: 71Mb L: 19/26 MS: 1 CopyPart- 00:06:28.959 [2024-07-15 20:20:21.085493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.085517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.959 [2024-07-15 20:20:21.085572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.085586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.959 #15 NEW cov: 12237 ft: 14614 corp: 9/144b lim: 30 exec/s: 0 rss: 71Mb L: 12/26 MS: 1 CopyPart- 00:06:28.959 [2024-07-15 20:20:21.125546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.125571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.959 [2024-07-15 20:20:21.125627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.125640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.959 #16 NEW cov: 12237 ft: 14673 corp: 10/156b lim: 30 exec/s: 0 rss: 71Mb L: 12/26 MS: 1 ShuffleBytes- 00:06:28.959 [2024-07-15 20:20:21.175984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.176012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.959 [2024-07-15 
20:20:21.176068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.176082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.959 [2024-07-15 20:20:21.176136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.176149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.959 [2024-07-15 20:20:21.176203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.176216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.959 #17 NEW cov: 12237 ft: 14723 corp: 11/182b lim: 30 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 ShuffleBytes- 00:06:28.959 [2024-07-15 20:20:21.216089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.216113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.959 [2024-07-15 20:20:21.216184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.216198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.959 [2024-07-15 20:20:21.216253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.959 [2024-07-15 20:20:21.216267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.960 [2024-07-15 20:20:21.216322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.960 [2024-07-15 20:20:21.216336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.960 #18 NEW cov: 12237 ft: 14744 corp: 12/210b lim: 30 exec/s: 0 rss: 71Mb L: 28/28 MS: 1 CopyPart- 00:06:28.960 [2024-07-15 20:20:21.265920] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (23808) > len (4) 00:06:28.960 [2024-07-15 20:20:21.266225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.960 [2024-07-15 20:20:21.266250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.960 [2024-07-15 20:20:21.266306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.960 [2024-07-15 20:20:21.266320] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.960 [2024-07-15 20:20:21.266375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.960 [2024-07-15 20:20:21.266389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.960 [2024-07-15 20:20:21.266447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.960 [2024-07-15 20:20:21.266465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.960 #19 NEW cov: 12237 ft: 14757 corp: 13/237b lim: 30 exec/s: 0 rss: 71Mb L: 27/28 MS: 1 InsertByte- 00:06:28.960 [2024-07-15 20:20:21.315822] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:28.960 [2024-07-15 20:20:21.316031] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (20484) > buf size (4096) 00:06:28.960 [2024-07-15 20:20:21.316246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.960 [2024-07-15 20:20:21.316271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.960 [2024-07-15 20:20:21.316342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.960 [2024-07-15 20:20:21.316357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.960 [2024-07-15 20:20:21.316412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:14000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.960 [2024-07-15 20:20:21.316426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.960 #20 NEW cov: 12245 ft: 14814 corp: 14/258b lim: 30 exec/s: 0 rss: 71Mb L: 21/28 MS: 1 CrossOver- 00:06:29.220 [2024-07-15 20:20:21.356212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.356237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.220 [2024-07-15 20:20:21.356307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.356322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.220 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:29.220 #21 NEW cov: 12268 ft: 14876 corp: 15/270b lim: 30 exec/s: 0 rss: 71Mb L: 12/28 MS: 1 ShuffleBytes- 00:06:29.220 [2024-07-15 20:20:21.396065] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len 
(10244) > buf size (4096) 00:06:29.220 [2024-07-15 20:20:21.396274] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (20484) > buf size (4096) 00:06:29.220 [2024-07-15 20:20:21.396477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.396502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.220 [2024-07-15 20:20:21.396560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.396575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.220 [2024-07-15 20:20:21.396629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:14000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.396643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.220 #22 NEW cov: 12268 ft: 14931 corp: 16/291b lim: 30 exec/s: 0 rss: 71Mb L: 21/28 MS: 1 ChangeBit- 00:06:29.220 [2024-07-15 20:20:21.446246] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:06:29.220 [2024-07-15 20:20:21.446365] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145980) > buf size (4096) 00:06:29.220 [2024-07-15 20:20:21.446481] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (65540) > buf size (4096) 00:06:29.220 [2024-07-15 20:20:21.446778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a8e028e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.446803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.220 [2024-07-15 20:20:21.446858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.446873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.220 [2024-07-15 20:20:21.446928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.446942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.220 [2024-07-15 20:20:21.446997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00140000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.447011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.220 #23 NEW cov: 12274 ft: 14951 corp: 17/319b lim: 30 exec/s: 23 rss: 71Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:29.220 [2024-07-15 20:20:21.496580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.220 [2024-07-15 20:20:21.496604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.221 [2024-07-15 20:20:21.496659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.221 [2024-07-15 20:20:21.496673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.221 #24 NEW cov: 12274 ft: 14972 corp: 18/334b lim: 30 exec/s: 24 rss: 71Mb L: 15/28 MS: 1 ChangeBinInt- 00:06:29.221 [2024-07-15 20:20:21.546870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.221 [2024-07-15 20:20:21.546895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.221 [2024-07-15 20:20:21.546951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.221 [2024-07-15 20:20:21.546965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.221 [2024-07-15 20:20:21.547018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000014 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.221 [2024-07-15 20:20:21.547031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.221 #25 NEW cov: 12274 ft: 14997 corp: 19/357b lim: 30 exec/s: 25 rss: 72Mb L: 23/28 MS: 1 CrossOver- 00:06:29.221 [2024-07-15 20:20:21.596657] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:06:29.221 [2024-07-15 20:20:21.596775] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145980) > buf size (4096) 00:06:29.221 [2024-07-15 20:20:21.596882] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (65540) > buf size (4096) 00:06:29.221 [2024-07-15 20:20:21.597179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a8e028e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.221 [2024-07-15 20:20:21.597207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.221 [2024-07-15 20:20:21.597265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.221 [2024-07-15 20:20:21.597280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.221 [2024-07-15 20:20:21.597334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.221 [2024-07-15 20:20:21.597349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.221 [2024-07-15 20:20:21.597401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 
cdw10:00140000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.221 [2024-07-15 20:20:21.597416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.509 #26 NEW cov: 12274 ft: 15013 corp: 20/385b lim: 30 exec/s: 26 rss: 72Mb L: 28/28 MS: 1 ChangeBinInt- 00:06:29.510 [2024-07-15 20:20:21.647185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.647211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.647268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.647282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.647336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.647350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.510 #27 NEW cov: 12274 ft: 15016 corp: 21/405b lim: 30 exec/s: 27 rss: 72Mb L: 20/28 MS: 1 CopyPart- 00:06:29.510 [2024-07-15 20:20:21.686873] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (93188) > buf size (4096) 00:06:29.510 [2024-07-15 20:20:21.687172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5b000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.687197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.687252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.687267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.510 #28 NEW cov: 12274 ft: 15083 corp: 22/417b lim: 30 exec/s: 28 rss: 72Mb L: 12/28 MS: 1 ChangeByte- 00:06:29.510 [2024-07-15 20:20:21.737033] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:29.510 [2024-07-15 20:20:21.737237] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (20484) > buf size (4096) 00:06:29.510 [2024-07-15 20:20:21.737446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.737471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.737530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.737544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.737600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:14000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.737614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.510 #29 NEW cov: 12274 ft: 15107 corp: 23/438b lim: 30 exec/s: 29 rss: 72Mb L: 21/28 MS: 1 ShuffleBytes- 00:06:29.510 [2024-07-15 20:20:21.777549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.777573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.777629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.777643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.777698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.777712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.510 #30 NEW cov: 12274 ft: 15141 corp: 24/459b lim: 30 exec/s: 30 rss: 72Mb L: 21/28 MS: 1 CrossOver- 00:06:29.510 [2024-07-15 20:20:21.817339] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2048) > len (4) 00:06:29.510 [2024-07-15 20:20:21.817573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.817598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.817656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.817671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.510 #31 NEW cov: 12274 ft: 15164 corp: 25/474b lim: 30 exec/s: 31 rss: 72Mb L: 15/28 MS: 1 ChangeBit- 00:06:29.510 [2024-07-15 20:20:21.857411] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x8e8e 00:06:29.510 [2024-07-15 20:20:21.857531] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145980) > buf size (4096) 00:06:29.510 [2024-07-15 20:20:21.857633] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (65540) > buf size (4096) 00:06:29.510 [2024-07-15 20:20:21.857923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.857948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.858075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.858087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.858104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:40000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.858114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.510 [2024-07-15 20:20:21.858134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00140000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.510 [2024-07-15 20:20:21.858145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.510 #32 NEW cov: 12274 ft: 15179 corp: 26/502b lim: 30 exec/s: 32 rss: 72Mb L: 28/28 MS: 1 CMP- DE: "\013\000\000\000"- 00:06:29.770 [2024-07-15 20:20:21.897541] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:06:29.770 [2024-07-15 20:20:21.897672] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145980) > buf size (4096) 00:06:29.770 [2024-07-15 20:20:21.898066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a8e028e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.898090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.898147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8e0030 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.898161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.898214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00400000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.898229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.898280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000014 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.898294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.770 #33 NEW cov: 12274 ft: 15195 corp: 27/531b lim: 30 exec/s: 33 rss: 72Mb L: 29/29 MS: 1 InsertByte- 00:06:29.770 [2024-07-15 20:20:21.937671] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:06:29.770 [2024-07-15 20:20:21.937785] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145980) > buf size (4096) 00:06:29.770 [2024-07-15 20:20:21.937990] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:06:29.770 [2024-07-15 20:20:21.938309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a8e028e 
cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.938334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.938391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8e0030 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.938405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.938460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00400000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.938473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.938528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000214 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.938542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.938594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.938610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:29.770 #34 NEW cov: 12274 ft: 15282 corp: 28/561b lim: 30 exec/s: 34 rss: 72Mb L: 30/30 MS: 1 CopyPart- 00:06:29.770 [2024-07-15 20:20:21.987791] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:06:29.770 [2024-07-15 20:20:21.987907] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145980) > buf size (4096) 00:06:29.770 [2024-07-15 20:20:21.988310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a8e028e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.988335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.988391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8e0030 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.988405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.988460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.988475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:21.988528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000014 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:21.988541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:06:29.770 #35 NEW cov: 12274 ft: 15296 corp: 29/590b lim: 30 exec/s: 35 rss: 72Mb L: 29/30 MS: 1 CMP- DE: "\000\000\000\006"- 00:06:29.770 [2024-07-15 20:20:22.027887] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:29.770 [2024-07-15 20:20:22.028109] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (20484) > buf size (4096) 00:06:29.770 [2024-07-15 20:20:22.028318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.028342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:22.028400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.028414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:22.028471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:14000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.028486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.770 #36 NEW cov: 12274 ft: 15325 corp: 30/611b lim: 30 exec/s: 36 rss: 72Mb L: 21/30 MS: 1 CrossOver- 00:06:29.770 [2024-07-15 20:20:22.067930] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:29.770 [2024-07-15 20:20:22.068237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.068262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:22.068315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.068333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.770 #37 NEW cov: 12274 ft: 15332 corp: 31/627b lim: 30 exec/s: 37 rss: 72Mb L: 16/30 MS: 1 CrossOver- 00:06:29.770 [2024-07-15 20:20:22.118224] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e8e 00:06:29.770 [2024-07-15 20:20:22.118342] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (145980) > buf size (4096) 00:06:29.770 [2024-07-15 20:20:22.118551] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:06:29.770 [2024-07-15 20:20:22.118860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a8e028e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.118885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:22.118942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:8e8e0030 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.118956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:22.119009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.119023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:22.119076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000214 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.119090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.770 [2024-07-15 20:20:22.119145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.770 [2024-07-15 20:20:22.119159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:30.031 #38 NEW cov: 12274 ft: 15338 corp: 32/657b lim: 30 exec/s: 38 rss: 72Mb L: 30/30 MS: 1 CrossOver- 00:06:30.031 [2024-07-15 20:20:22.168338] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2048) > len (4) 00:06:30.031 [2024-07-15 20:20:22.168587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.168612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.168670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.168685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.031 #39 NEW cov: 12274 ft: 15357 corp: 33/672b lim: 30 exec/s: 39 rss: 72Mb L: 15/30 MS: 1 ChangeBit- 00:06:30.031 [2024-07-15 20:20:22.218708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.218733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.218784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.218798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.031 #40 NEW cov: 12274 ft: 15366 corp: 34/689b lim: 30 exec/s: 40 rss: 72Mb L: 17/30 MS: 1 CrossOver- 00:06:30.031 [2024-07-15 20:20:22.258525] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524292) > buf size (4096) 00:06:30.031 [2024-07-15 20:20:22.258941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000002 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.258967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.259022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.259036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.259090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.259104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.031 #41 NEW cov: 12274 ft: 15382 corp: 35/710b lim: 30 exec/s: 41 rss: 72Mb L: 21/30 MS: 1 ChangeBit- 00:06:30.031 [2024-07-15 20:20:22.309127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.309151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.309208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.309222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.309275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000014 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.309288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.031 #42 NEW cov: 12274 ft: 15392 corp: 36/733b lim: 30 exec/s: 42 rss: 72Mb L: 23/30 MS: 1 InsertRepeatedBytes- 00:06:30.031 [2024-07-15 20:20:22.349289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.349314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.349371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.349385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.349439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.349457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.349511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.349524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.031 #43 NEW cov: 12274 ft: 15404 corp: 37/758b lim: 30 exec/s: 43 rss: 72Mb L: 25/30 MS: 1 CopyPart- 00:06:30.031 [2024-07-15 20:20:22.389085] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (5120) > len (4) 00:06:30.031 [2024-07-15 20:20:22.389321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.389349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.389407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.389421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.031 [2024-07-15 20:20:22.389473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.031 [2024-07-15 20:20:22.389488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.291 #44 NEW cov: 12274 ft: 15423 corp: 38/779b lim: 30 exec/s: 44 rss: 72Mb L: 21/30 MS: 1 ChangeBit- 00:06:30.291 [2024-07-15 20:20:22.439096] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:30.291 [2024-07-15 20:20:22.439312] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (5120) > len (4) 00:06:30.291 [2024-07-15 20:20:22.439647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.291 [2024-07-15 20:20:22.439674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.291 [2024-07-15 20:20:22.439731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.291 [2024-07-15 20:20:22.439746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.291 [2024-07-15 20:20:22.439801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.291 [2024-07-15 20:20:22.439815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.291 [2024-07-15 20:20:22.439869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.291 [2024-07-15 20:20:22.439883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.291 #45 NEW cov: 12274 ft: 15457 corp: 39/804b lim: 30 exec/s: 45 rss: 72Mb L: 25/30 MS: 1 PersAutoDict- DE: 
"\013\000\000\000"- 00:06:30.291 [2024-07-15 20:20:22.479541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.291 [2024-07-15 20:20:22.479566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.291 [2024-07-15 20:20:22.479621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.291 [2024-07-15 20:20:22.479634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.291 [2024-07-15 20:20:22.479687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.291 [2024-07-15 20:20:22.479701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.291 #46 NEW cov: 12274 ft: 15461 corp: 40/824b lim: 30 exec/s: 23 rss: 72Mb L: 20/30 MS: 1 ShuffleBytes- 00:06:30.292 #46 DONE cov: 12274 ft: 15461 corp: 40/824b lim: 30 exec/s: 23 rss: 72Mb 00:06:30.292 ###### Recommended dictionary. ###### 00:06:30.292 "\013\000\000\000" # Uses: 1 00:06:30.292 "\000\000\000\006" # Uses: 0 00:06:30.292 ###### End of recommended dictionary. ###### 00:06:30.292 Done 46 runs in 2 second(s) 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:30.292 20:20:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:30.292 [2024-07-15 20:20:22.670020] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:30.292 [2024-07-15 20:20:22.670091] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319104 ] 00:06:30.552 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.552 [2024-07-15 20:20:22.853598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.552 [2024-07-15 20:20:22.919507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.811 [2024-07-15 20:20:22.978908] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.811 [2024-07-15 20:20:22.995197] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:30.811 INFO: Running with entropic power schedule (0xFF, 100). 00:06:30.811 INFO: Seed: 1877986791 00:06:30.811 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:30.811 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:30.811 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:30.811 INFO: A corpus is not provided, starting from an empty corpus 00:06:30.811 #2 INITED exec/s: 0 rss: 63Mb 00:06:30.811 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:30.811 This may also happen if the target rejected all inputs we tried so far 00:06:30.811 [2024-07-15 20:20:23.040384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:14140023 cdw11:14001414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.811 [2024-07-15 20:20:23.040415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.115 NEW_FUNC[1/697]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:31.115 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:31.115 #4 NEW cov: 11957 ft: 11956 corp: 2/9b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:31.115 [2024-07-15 20:20:23.371273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a830022 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.115 [2024-07-15 20:20:23.371304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.115 #14 NEW cov: 12070 ft: 12595 corp: 3/22b lim: 35 exec/s: 0 rss: 70Mb L: 13/13 MS: 5 InsertByte-ChangeByte-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:06:31.115 [2024-07-15 20:20:23.411305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a830022 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.115 [2024-07-15 20:20:23.411330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.115 #15 NEW cov: 12076 ft: 12813 corp: 4/35b lim: 35 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ShuffleBytes- 00:06:31.115 [2024-07-15 20:20:23.461413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d91a00ab cdw11:3f0006bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.115 [2024-07-15 20:20:23.461438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.115 #17 NEW cov: 12161 ft: 13080 corp: 5/44b lim: 35 exec/s: 0 rss: 70Mb L: 9/13 MS: 2 ShuffleBytes-CMP- DE: "\253\331\032\006\273?+\000"- 00:06:31.373 [2024-07-15 20:20:23.501564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:83830022 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.373 [2024-07-15 20:20:23.501591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.373 #18 NEW cov: 12161 ft: 13145 corp: 6/55b lim: 35 exec/s: 0 rss: 70Mb L: 11/13 MS: 1 EraseBytes- 00:06:31.373 [2024-07-15 20:20:23.541635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:14140023 cdw11:14001440 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.373 [2024-07-15 20:20:23.541660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.373 #19 NEW cov: 12161 ft: 13242 corp: 7/64b lim: 35 exec/s: 0 rss: 70Mb L: 9/13 MS: 1 InsertByte- 00:06:31.373 [2024-07-15 20:20:23.591772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:23140023 cdw11:14001414 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:31.373 [2024-07-15 20:20:23.591797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.373 #20 NEW cov: 12161 ft: 13339 corp: 8/72b lim: 35 exec/s: 0 rss: 70Mb L: 8/13 MS: 1 CopyPart- 00:06:31.373 [2024-07-15 20:20:23.631842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:14140040 cdw11:23001414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.373 [2024-07-15 20:20:23.631869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.373 #21 NEW cov: 12161 ft: 13356 corp: 9/81b lim: 35 exec/s: 0 rss: 70Mb L: 9/13 MS: 1 ShuffleBytes- 00:06:31.373 [2024-07-15 20:20:23.682005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:231400ab cdw11:3f0006bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.373 [2024-07-15 20:20:23.682030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.373 #22 NEW cov: 12161 ft: 13422 corp: 10/90b lim: 35 exec/s: 0 rss: 70Mb L: 9/13 MS: 1 CrossOver- 00:06:31.373 [2024-07-15 20:20:23.732286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:83830022 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.373 [2024-07-15 20:20:23.732311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.373 [2024-07-15 20:20:23.732368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:83830083 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.373 [2024-07-15 20:20:23.732382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.633 #28 NEW cov: 12161 ft: 13847 corp: 11/107b lim: 35 exec/s: 0 rss: 71Mb L: 17/17 MS: 1 CopyPart- 00:06:31.633 [2024-07-15 20:20:23.782326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:14140040 cdw11:14001414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.782351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.633 #29 NEW cov: 12161 ft: 13866 corp: 12/114b lim: 35 exec/s: 0 rss: 71Mb L: 7/17 MS: 1 EraseBytes- 00:06:31.633 [2024-07-15 20:20:23.832725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a830022 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.832751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.633 [2024-07-15 20:20:23.832808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:83ab0083 cdw11:0600d91a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.832822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.633 [2024-07-15 20:20:23.832879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2b00003f cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.832892] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.633 #30 NEW cov: 12161 ft: 14114 corp: 13/135b lim: 35 exec/s: 0 rss: 71Mb L: 21/21 MS: 1 PersAutoDict- DE: "\253\331\032\006\273?+\000"- 00:06:31.633 [2024-07-15 20:20:23.872806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a830022 cdw11:2d008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.872832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.633 [2024-07-15 20:20:23.872891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2d2d002d cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.872905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.633 [2024-07-15 20:20:23.872959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d91a00ab cdw11:3f0006bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.872973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.633 #31 NEW cov: 12161 ft: 14147 corp: 14/161b lim: 35 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:06:31.633 [2024-07-15 20:20:23.922682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:14140023 cdw11:14001414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.922706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.633 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:31.633 #32 NEW cov: 12184 ft: 14209 corp: 15/169b lim: 35 exec/s: 0 rss: 71Mb L: 8/26 MS: 1 CopyPart- 00:06:31.633 [2024-07-15 20:20:23.962695] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.633 [2024-07-15 20:20:23.962833] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.633 [2024-07-15 20:20:23.962939] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.633 [2024-07-15 20:20:23.963046] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.633 [2024-07-15 20:20:23.963251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.963279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.633 [2024-07-15 20:20:23.963337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.963353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.633 [2024-07-15 20:20:23.963407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.963421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.633 [2024-07-15 20:20:23.963475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:1a00abd9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:23.963491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.633 #33 NEW cov: 12193 ft: 14788 corp: 16/202b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:31.633 [2024-07-15 20:20:24.002890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:19140003 cdw11:14008714 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.633 [2024-07-15 20:20:24.002917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.892 #37 NEW cov: 12200 ft: 14843 corp: 17/209b lim: 35 exec/s: 37 rss: 71Mb L: 7/33 MS: 4 EraseBytes-ChangeBit-ChangeBinInt-InsertByte- 00:06:31.892 [2024-07-15 20:20:24.053019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:14140014 cdw11:23001414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.892 [2024-07-15 20:20:24.053045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.892 #38 NEW cov: 12200 ft: 14886 corp: 18/217b lim: 35 exec/s: 38 rss: 71Mb L: 8/33 MS: 1 ShuffleBytes- 00:06:31.892 [2024-07-15 20:20:24.103347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:83830022 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.892 [2024-07-15 20:20:24.103373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.892 [2024-07-15 20:20:24.103430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:83830083 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.892 [2024-07-15 20:20:24.103447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.892 #39 NEW cov: 12200 ft: 14904 corp: 19/235b lim: 35 exec/s: 39 rss: 71Mb L: 18/33 MS: 1 CrossOver- 00:06:31.892 [2024-07-15 20:20:24.153497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3f3f003f cdw11:3f003f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.892 [2024-07-15 20:20:24.153522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.892 [2024-07-15 20:20:24.153577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3f3f003f cdw11:e7003fec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.892 [2024-07-15 20:20:24.153590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.892 #42 NEW cov: 12200 ft: 14921 corp: 20/250b lim: 35 exec/s: 42 rss: 71Mb L: 15/33 MS: 3 EraseBytes-ChangeBinInt-InsertRepeatedBytes- 00:06:31.892 [2024-07-15 20:20:24.203754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 
nsid:0 cdw10:0a830022 cdw11:2d008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.893 [2024-07-15 20:20:24.203778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.893 [2024-07-15 20:20:24.203833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2d2d002d cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.893 [2024-07-15 20:20:24.203847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.893 [2024-07-15 20:20:24.203917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d91a0003 cdw11:3f0006bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.893 [2024-07-15 20:20:24.203933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.893 #43 NEW cov: 12200 ft: 14945 corp: 21/276b lim: 35 exec/s: 43 rss: 72Mb L: 26/33 MS: 1 CMP- DE: "\000\003"- 00:06:31.893 [2024-07-15 20:20:24.253433] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:31.893 [2024-07-15 20:20:24.253649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.893 [2024-07-15 20:20:24.253676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.893 #44 NEW cov: 12200 ft: 14997 corp: 22/284b lim: 35 exec/s: 44 rss: 72Mb L: 8/33 MS: 1 ChangeBinInt- 00:06:32.151 [2024-07-15 20:20:24.293720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a830022 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.151 [2024-07-15 20:20:24.293745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.151 #45 NEW cov: 12200 ft: 15105 corp: 23/297b lim: 35 exec/s: 45 rss: 72Mb L: 13/33 MS: 1 ChangeByte- 00:06:32.151 [2024-07-15 20:20:24.333731] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.151 [2024-07-15 20:20:24.333850] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.151 [2024-07-15 20:20:24.333958] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.151 [2024-07-15 20:20:24.334176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.151 [2024-07-15 20:20:24.334202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.151 [2024-07-15 20:20:24.334260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.152 [2024-07-15 20:20:24.334275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.152 [2024-07-15 20:20:24.334331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:32.152 [2024-07-15 20:20:24.334347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.152 #47 NEW cov: 12200 ft: 15158 corp: 24/320b lim: 35 exec/s: 47 rss: 72Mb L: 23/33 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:32.152 [2024-07-15 20:20:24.373955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:14140023 cdw11:14001440 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.152 [2024-07-15 20:20:24.373980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.152 #48 NEW cov: 12200 ft: 15179 corp: 25/329b lim: 35 exec/s: 48 rss: 72Mb L: 9/33 MS: 1 PersAutoDict- DE: "\000\003"- 00:06:32.152 [2024-07-15 20:20:24.414381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a830022 cdw11:2d008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.152 [2024-07-15 20:20:24.414405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.152 [2024-07-15 20:20:24.414477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2d2d002d cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.152 [2024-07-15 20:20:24.414491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.152 [2024-07-15 20:20:24.414545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d9200003 cdw11:3f0006bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.152 [2024-07-15 20:20:24.414560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.152 #49 NEW cov: 12200 ft: 15184 corp: 26/355b lim: 35 exec/s: 49 rss: 72Mb L: 26/33 MS: 1 ChangeBinInt- 00:06:32.152 [2024-07-15 20:20:24.464257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:23140023 cdw11:b6001414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.152 [2024-07-15 20:20:24.464281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.152 #50 NEW cov: 12200 ft: 15204 corp: 27/363b lim: 35 exec/s: 50 rss: 72Mb L: 8/33 MS: 1 ChangeByte- 00:06:32.152 [2024-07-15 20:20:24.504352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3f3f003f cdw11:3f003f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.152 [2024-07-15 20:20:24.504376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.413 #51 NEW cov: 12200 ft: 15216 corp: 28/376b lim: 35 exec/s: 51 rss: 72Mb L: 13/33 MS: 1 EraseBytes- 00:06:32.413 [2024-07-15 20:20:24.554516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:23140023 cdw11:00001401 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.413 [2024-07-15 20:20:24.554541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.413 #52 NEW cov: 12200 ft: 15230 corp: 29/384b lim: 35 exec/s: 52 rss: 72Mb L: 8/33 MS: 1 CMP- DE: "\001\000\000\177"- 00:06:32.413 [2024-07-15 20:20:24.604538] 
ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.413 [2024-07-15 20:20:24.605038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00c00000 cdw11:c000c0c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.413 [2024-07-15 20:20:24.605064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.413 [2024-07-15 20:20:24.605121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c0c000c0 cdw11:c000c0c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.413 [2024-07-15 20:20:24.605134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.413 [2024-07-15 20:20:24.605202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:c0c000c0 cdw11:c000c0c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.413 [2024-07-15 20:20:24.605216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.413 [2024-07-15 20:20:24.605274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:c0c000c0 cdw11:0000c000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.413 [2024-07-15 20:20:24.605288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.413 #53 NEW cov: 12200 ft: 15246 corp: 30/415b lim: 35 exec/s: 53 rss: 72Mb L: 31/33 MS: 1 InsertRepeatedBytes- 00:06:32.413 [2024-07-15 20:20:24.654783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a830022 cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.413 [2024-07-15 20:20:24.654806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.413 #54 NEW cov: 12200 ft: 15260 corp: 31/425b lim: 35 exec/s: 54 rss: 72Mb L: 10/33 MS: 1 EraseBytes- 00:06:32.414 [2024-07-15 20:20:24.684998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3f3f003f cdw11:3f003f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.414 [2024-07-15 20:20:24.685024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.414 [2024-07-15 20:20:24.685079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ece7003f cdw11:3f003f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.414 [2024-07-15 20:20:24.685093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.414 #55 NEW cov: 12200 ft: 15296 corp: 32/440b lim: 35 exec/s: 55 rss: 72Mb L: 15/33 MS: 1 ShuffleBytes- 00:06:32.414 [2024-07-15 20:20:24.724937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3f3f003f cdw11:3f003f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.414 [2024-07-15 20:20:24.724962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.414 #56 NEW cov: 12200 ft: 15324 corp: 33/453b lim: 35 exec/s: 56 rss: 72Mb L: 13/33 MS: 1 ChangeBit- 00:06:32.414 [2024-07-15 20:20:24.775346] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a830022 cdw11:c3008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.414 [2024-07-15 20:20:24.775371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.414 [2024-07-15 20:20:24.775428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:83ab0083 cdw11:0600d91a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.414 [2024-07-15 20:20:24.775445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.414 [2024-07-15 20:20:24.775499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2b00003f cdw11:83008383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.414 [2024-07-15 20:20:24.775513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.783 #57 NEW cov: 12200 ft: 15335 corp: 34/474b lim: 35 exec/s: 57 rss: 72Mb L: 21/33 MS: 1 ChangeBit- 00:06:32.783 [2024-07-15 20:20:24.815299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:baba0023 cdw11:ba00baba SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:24.815324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.783 [2024-07-15 20:20:24.815380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:14140014 cdw11:14001414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:24.815393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.783 #58 NEW cov: 12200 ft: 15360 corp: 35/488b lim: 35 exec/s: 58 rss: 72Mb L: 14/33 MS: 1 InsertRepeatedBytes- 00:06:32.783 [2024-07-15 20:20:24.855271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d91a00ab cdw11:34003434 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:24.855295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.783 #59 NEW cov: 12200 ft: 15374 corp: 36/501b lim: 35 exec/s: 59 rss: 72Mb L: 13/33 MS: 1 InsertRepeatedBytes- 00:06:32.783 [2024-07-15 20:20:24.895322] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.783 [2024-07-15 20:20:24.895538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d91a00ab cdw11:3f0006bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:24.895562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.783 [2024-07-15 20:20:24.895619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d91a0000 cdw11:3f0006bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:24.895633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.783 #60 NEW cov: 12200 ft: 15387 corp: 37/517b lim: 35 exec/s: 60 rss: 72Mb L: 16/33 MS: 1 CopyPart- 00:06:32.783 [2024-07-15 20:20:24.935554] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:06bb0023 cdw11:23003f2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:24.935579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.783 #61 NEW cov: 12200 ft: 15396 corp: 38/529b lim: 35 exec/s: 61 rss: 72Mb L: 12/33 MS: 1 CrossOver- 00:06:32.783 [2024-07-15 20:20:24.975613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:83010022 cdw11:7f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:24.975638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.783 #62 NEW cov: 12200 ft: 15439 corp: 39/540b lim: 35 exec/s: 62 rss: 72Mb L: 11/33 MS: 1 PersAutoDict- DE: "\001\000\000\177"- 00:06:32.783 [2024-07-15 20:20:25.015601] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.783 [2024-07-15 20:20:25.015719] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.783 [2024-07-15 20:20:25.015829] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.783 [2024-07-15 20:20:25.016038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:25.016065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.783 [2024-07-15 20:20:25.016122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:25.016139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.783 [2024-07-15 20:20:25.016199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:20000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.783 [2024-07-15 20:20:25.016215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.783 #63 NEW cov: 12200 ft: 15446 corp: 40/563b lim: 35 exec/s: 31 rss: 72Mb L: 23/33 MS: 1 ChangeBit- 00:06:32.783 #63 DONE cov: 12200 ft: 15446 corp: 40/563b lim: 35 exec/s: 31 rss: 72Mb 00:06:32.783 ###### Recommended dictionary. ###### 00:06:32.783 "\253\331\032\006\273?+\000" # Uses: 1 00:06:32.783 "\000\003" # Uses: 1 00:06:32.783 "\001\000\000\177" # Uses: 1 00:06:32.783 ###### End of recommended dictionary. 
###### 00:06:32.783 Done 63 runs in 2 second(s) 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:33.043 20:20:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:33.043 [2024-07-15 20:20:25.218199] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:33.043 [2024-07-15 20:20:25.218271] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319404 ] 00:06:33.043 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.043 [2024-07-15 20:20:25.399703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.301 [2024-07-15 20:20:25.471038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.301 [2024-07-15 20:20:25.530251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.301 [2024-07-15 20:20:25.546548] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:33.301 INFO: Running with entropic power schedule (0xFF, 100). 00:06:33.301 INFO: Seed: 133053222 00:06:33.301 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:33.301 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:33.301 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:33.301 INFO: A corpus is not provided, starting from an empty corpus 00:06:33.301 #2 INITED exec/s: 0 rss: 64Mb 00:06:33.301 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:33.301 This may also happen if the target rejected all inputs we tried so far 00:06:33.301 [2024-07-15 20:20:25.605544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.301 [2024-07-15 20:20:25.605574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.559 NEW_FUNC[1/705]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:33.559 NEW_FUNC[2/705]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:33.559 #3 NEW cov: 12190 ft: 12188 corp: 2/21b lim: 20 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:33.559 [2024-07-15 20:20:25.926513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.559 [2024-07-15 20:20:25.926567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.817 NEW_FUNC[1/1]: 0x1aabb40 in spdk_sock_recv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:461 00:06:33.817 #4 NEW cov: 12317 ft: 12884 corp: 3/41b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 CMP- DE: "\037\000\000\000"- 00:06:33.817 #6 NEW cov: 12323 ft: 13265 corp: 4/61b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:33.817 #7 NEW cov: 12408 ft: 13481 corp: 5/81b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 CrossOver- 00:06:33.817 #10 NEW cov: 12413 ft: 13941 corp: 6/89b lim: 20 exec/s: 0 rss: 71Mb L: 8/20 MS: 3 CopyPart-ChangeByte-CrossOver- 00:06:33.817 #11 NEW cov: 12413 ft: 14092 corp: 7/109b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 PersAutoDict- DE: "\037\000\000\000"- 00:06:33.817 #12 NEW cov: 12413 ft: 14145 corp: 8/119b lim: 20 exec/s: 0 rss: 71Mb L: 10/20 MS: 1 EraseBytes- 00:06:34.076 [2024-07-15 
20:20:26.207096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.076 [2024-07-15 20:20:26.207125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.076 #13 NEW cov: 12413 ft: 14261 corp: 9/139b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 ChangeBit- 00:06:34.076 [2024-07-15 20:20:26.256864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.076 [2024-07-15 20:20:26.256891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.076 #15 NEW cov: 12417 ft: 14432 corp: 10/151b lim: 20 exec/s: 0 rss: 71Mb L: 12/20 MS: 2 CopyPart-CrossOver- 00:06:34.076 [2024-07-15 20:20:26.297163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.076 [2024-07-15 20:20:26.297187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.076 #16 NEW cov: 12417 ft: 14489 corp: 11/171b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\005"- 00:06:34.076 #17 NEW cov: 12417 ft: 14494 corp: 12/191b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 ChangeByte- 00:06:34.076 #18 NEW cov: 12417 ft: 14535 corp: 13/211b lim: 20 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:34.076 #19 NEW cov: 12417 ft: 14547 corp: 14/219b lim: 20 exec/s: 0 rss: 71Mb L: 8/20 MS: 1 ChangeBit- 00:06:34.335 [2024-07-15 20:20:26.467529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.335 [2024-07-15 20:20:26.467555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.335 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:34.335 #20 NEW cov: 12441 ft: 14619 corp: 15/235b lim: 20 exec/s: 0 rss: 72Mb L: 16/20 MS: 1 CrossOver- 00:06:34.335 #21 NEW cov: 12441 ft: 14697 corp: 16/255b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 CMP- DE: "\000\000\000\015"- 00:06:34.335 #23 NEW cov: 12441 ft: 14710 corp: 17/264b lim: 20 exec/s: 23 rss: 72Mb L: 9/20 MS: 2 CrossOver-PersAutoDict- DE: "\377\377\377\377\377\377\377\005"- 00:06:34.335 #24 NEW cov: 12441 ft: 14735 corp: 18/284b lim: 20 exec/s: 24 rss: 72Mb L: 20/20 MS: 1 ChangeBit- 00:06:34.335 #25 NEW cov: 12441 ft: 14743 corp: 19/304b lim: 20 exec/s: 25 rss: 72Mb L: 20/20 MS: 1 ChangeByte- 00:06:34.335 #26 NEW cov: 12441 ft: 14784 corp: 20/324b lim: 20 exec/s: 26 rss: 72Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:34.594 [2024-07-15 20:20:26.738197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.594 [2024-07-15 20:20:26.738225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.594 #27 NEW cov: 12441 ft: 14794 corp: 21/339b lim: 20 exec/s: 27 rss: 72Mb L: 15/20 MS: 1 EraseBytes- 00:06:34.594 [2024-07-15 20:20:26.788658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:06:34.594 [2024-07-15 20:20:26.788684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.594 #28 NEW cov: 12441 ft: 14822 corp: 22/359b lim: 20 exec/s: 28 rss: 72Mb L: 20/20 MS: 1 ChangeBit- 00:06:34.594 #29 NEW cov: 12441 ft: 14839 corp: 23/379b lim: 20 exec/s: 29 rss: 72Mb L: 20/20 MS: 1 ChangeBit- 00:06:34.594 #30 NEW cov: 12441 ft: 15052 corp: 24/399b lim: 20 exec/s: 30 rss: 72Mb L: 20/20 MS: 1 ChangeBinInt- 00:06:34.594 #31 NEW cov: 12441 ft: 15079 corp: 25/411b lim: 20 exec/s: 31 rss: 72Mb L: 12/20 MS: 1 EraseBytes- 00:06:34.594 [2024-07-15 20:20:26.959163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.594 [2024-07-15 20:20:26.959188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.853 #32 NEW cov: 12441 ft: 15098 corp: 26/431b lim: 20 exec/s: 32 rss: 72Mb L: 20/20 MS: 1 ChangeBinInt- 00:06:34.853 #33 NEW cov: 12441 ft: 15111 corp: 27/451b lim: 20 exec/s: 33 rss: 72Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:34.853 [2024-07-15 20:20:27.039294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.853 [2024-07-15 20:20:27.039320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.853 #34 NEW cov: 12441 ft: 15124 corp: 28/471b lim: 20 exec/s: 34 rss: 72Mb L: 20/20 MS: 1 CrossOver- 00:06:34.853 [2024-07-15 20:20:27.089311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.853 [2024-07-15 20:20:27.089337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.853 #35 NEW cov: 12441 ft: 15141 corp: 29/487b lim: 20 exec/s: 35 rss: 72Mb L: 16/20 MS: 1 InsertByte- 00:06:34.853 [2024-07-15 20:20:27.139402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.853 [2024-07-15 20:20:27.139429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.853 #36 NEW cov: 12441 ft: 15185 corp: 30/504b lim: 20 exec/s: 36 rss: 72Mb L: 17/20 MS: 1 InsertByte- 00:06:34.853 [2024-07-15 20:20:27.189793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:34.853 [2024-07-15 20:20:27.189819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.853 #37 NEW cov: 12441 ft: 15223 corp: 31/524b lim: 20 exec/s: 37 rss: 72Mb L: 20/20 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\005"- 00:06:35.112 [2024-07-15 20:20:27.239666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.112 [2024-07-15 20:20:27.239693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.112 #38 NEW cov: 12441 ft: 15243 corp: 32/541b lim: 20 exec/s: 38 rss: 73Mb L: 17/20 MS: 1 ChangeBinInt- 00:06:35.112 
#39 NEW cov: 12441 ft: 15258 corp: 33/561b lim: 20 exec/s: 39 rss: 73Mb L: 20/20 MS: 1 ShuffleBytes- 00:06:35.112 [2024-07-15 20:20:27.340184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.112 [2024-07-15 20:20:27.340214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.112 #40 NEW cov: 12441 ft: 15285 corp: 34/581b lim: 20 exec/s: 40 rss: 73Mb L: 20/20 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:35.112 [2024-07-15 20:20:27.380026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.112 [2024-07-15 20:20:27.380050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.112 #41 NEW cov: 12441 ft: 15327 corp: 35/597b lim: 20 exec/s: 41 rss: 73Mb L: 16/20 MS: 1 ChangeBit- 00:06:35.112 #42 NEW cov: 12441 ft: 15579 corp: 36/603b lim: 20 exec/s: 42 rss: 73Mb L: 6/20 MS: 1 CrossOver- 00:06:35.112 #43 NEW cov: 12441 ft: 15591 corp: 37/623b lim: 20 exec/s: 43 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:06:35.371 #44 NEW cov: 12441 ft: 15592 corp: 38/636b lim: 20 exec/s: 44 rss: 73Mb L: 13/20 MS: 1 EraseBytes- 00:06:35.371 #45 NEW cov: 12441 ft: 15603 corp: 39/656b lim: 20 exec/s: 45 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:06:35.371 [2024-07-15 20:20:27.590760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.371 [2024-07-15 20:20:27.590786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.371 #46 NEW cov: 12441 ft: 15639 corp: 40/672b lim: 20 exec/s: 23 rss: 73Mb L: 16/20 MS: 1 CopyPart- 00:06:35.371 #46 DONE cov: 12441 ft: 15639 corp: 40/672b lim: 20 exec/s: 23 rss: 73Mb 00:06:35.371 ###### Recommended dictionary. ###### 00:06:35.371 "\037\000\000\000" # Uses: 1 00:06:35.371 "\377\377\377\377\377\377\377\005" # Uses: 2 00:06:35.371 "\000\000\000\015" # Uses: 0 00:06:35.371 "\000\000\000\000\000\000\000\000" # Uses: 0 00:06:35.371 ###### End of recommended dictionary. 
###### 00:06:35.371 Done 46 runs in 2 second(s) 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:35.371 20:20:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:35.630 [2024-07-15 20:20:27.776876] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:35.630 [2024-07-15 20:20:27.776947] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319930 ] 00:06:35.630 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.630 [2024-07-15 20:20:27.954094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.889 [2024-07-15 20:20:28.019218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.889 [2024-07-15 20:20:28.078426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.889 [2024-07-15 20:20:28.094717] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:35.889 INFO: Running with entropic power schedule (0xFF, 100). 00:06:35.889 INFO: Seed: 2684022981 00:06:35.889 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:35.889 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:35.889 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:35.890 INFO: A corpus is not provided, starting from an empty corpus 00:06:35.890 #2 INITED exec/s: 0 rss: 63Mb 00:06:35.890 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:35.890 This may also happen if the target rejected all inputs we tried so far 00:06:35.890 [2024-07-15 20:20:28.150461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.890 [2024-07-15 20:20:28.150489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.890 [2024-07-15 20:20:28.150545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.890 [2024-07-15 20:20:28.150559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.890 [2024-07-15 20:20:28.150615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.890 [2024-07-15 20:20:28.150629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.890 [2024-07-15 20:20:28.150681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.890 [2024-07-15 20:20:28.150694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.148 NEW_FUNC[1/693]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:36.148 NEW_FUNC[2/693]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:36.148 #11 NEW cov: 11934 ft: 11932 corp: 2/35b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 4 CopyPart-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:06:36.148 [2024-07-15 20:20:28.471446] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.148 [2024-07-15 20:20:28.471479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.148 [2024-07-15 20:20:28.471549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.148 [2024-07-15 20:20:28.471564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.148 [2024-07-15 20:20:28.471619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.148 [2024-07-15 20:20:28.471633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.148 [2024-07-15 20:20:28.471685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.148 [2024-07-15 20:20:28.471708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.148 [2024-07-15 20:20:28.471761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:c0c0c0c0 cdw11:45c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.148 [2024-07-15 20:20:28.471774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.148 NEW_FUNC[1/5]: 0x17a6ec0 in spdk_nvme_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:757 00:06:36.148 NEW_FUNC[2/5]: 0x180b310 in nvme_transport_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:625 00:06:36.148 #12 NEW cov: 12091 ft: 12611 corp: 3/70b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertByte- 00:06:36.407 [2024-07-15 20:20:28.530835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.530862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.407 #13 NEW cov: 12097 ft: 13754 corp: 4/82b lim: 35 exec/s: 0 rss: 70Mb L: 12/35 MS: 1 InsertRepeatedBytes- 00:06:36.407 [2024-07-15 20:20:28.571568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.571594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.407 [2024-07-15 20:20:28.571661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.571675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.407 [2024-07-15 20:20:28.571729] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.571742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.407 [2024-07-15 20:20:28.571795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.571809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.407 [2024-07-15 20:20:28.571871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ff3fffff cdw11:3fee0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.571884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.407 #18 NEW cov: 12182 ft: 14109 corp: 5/117b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 5 ChangeBinInt-ChangeBit-InsertByte-CopyPart-InsertRepeatedBytes- 00:06:36.407 [2024-07-15 20:20:28.611668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.611693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.407 [2024-07-15 20:20:28.611776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.611790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.407 [2024-07-15 20:20:28.611842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0c0c0 cdw11:c0400003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.611856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.407 [2024-07-15 20:20:28.611907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.611920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.407 [2024-07-15 20:20:28.611971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:c0c0c0c0 cdw11:45c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.407 [2024-07-15 20:20:28.611985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.407 #19 NEW cov: 12182 ft: 14207 corp: 6/152b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ChangeBit- 00:06:36.408 [2024-07-15 20:20:28.661670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.661695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:36.408 [2024-07-15 20:20:28.661747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.661761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.408 [2024-07-15 20:20:28.661812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0e0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.661826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.408 [2024-07-15 20:20:28.661877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.661890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.408 #20 NEW cov: 12182 ft: 14258 corp: 7/186b lim: 35 exec/s: 0 rss: 70Mb L: 34/35 MS: 1 ChangeBit- 00:06:36.408 [2024-07-15 20:20:28.701782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3f3fc040 cdw11:3f3f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.701807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.408 [2024-07-15 20:20:28.701862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c03f40 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.701876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.408 [2024-07-15 20:20:28.701929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0e0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.701943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.408 [2024-07-15 20:20:28.701999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.702012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.408 #21 NEW cov: 12182 ft: 14311 corp: 8/220b lim: 35 exec/s: 0 rss: 71Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:36.408 [2024-07-15 20:20:28.752044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.752069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.408 [2024-07-15 20:20:28.752139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.752152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:36.408 [2024-07-15 20:20:28.752205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0c0c0 cdw11:c0400003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.752218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.408 [2024-07-15 20:20:28.752271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.752286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.408 [2024-07-15 20:20:28.752338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:c0c0c0c0 cdw11:55c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.408 [2024-07-15 20:20:28.752351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.408 #22 NEW cov: 12182 ft: 14329 corp: 9/255b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBit- 00:06:36.667 [2024-07-15 20:20:28.802035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3f3fc040 cdw11:3f3f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:28.802060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.667 [2024-07-15 20:20:28.802113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c03f40 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:28.802126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.667 [2024-07-15 20:20:28.802176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0e0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:28.802189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.667 [2024-07-15 20:20:28.802240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:28.802252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.667 #23 NEW cov: 12182 ft: 14383 corp: 10/289b lim: 35 exec/s: 0 rss: 71Mb L: 34/35 MS: 1 CopyPart- 00:06:36.667 [2024-07-15 20:20:28.851859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:28.851884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.667 [2024-07-15 20:20:28.851940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:28.851954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:36.667 #24 NEW cov: 12182 ft: 14644 corp: 11/308b lim: 35 exec/s: 0 rss: 71Mb L: 19/35 MS: 1 EraseBytes- 00:06:36.667 [2024-07-15 20:20:28.901999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:28.902023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.667 [2024-07-15 20:20:28.902076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:13c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:28.902089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.667 #25 NEW cov: 12182 ft: 14661 corp: 12/327b lim: 35 exec/s: 0 rss: 71Mb L: 19/35 MS: 1 ChangeBinInt- 00:06:36.667 [2024-07-15 20:20:28.951994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff030000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:28.952018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.667 #26 NEW cov: 12182 ft: 14677 corp: 13/339b lim: 35 exec/s: 0 rss: 71Mb L: 12/35 MS: 1 CMP- DE: "\377\003\000\000\000\000\000\000"- 00:06:36.667 [2024-07-15 20:20:29.002123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff29ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:29.002147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.667 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:36.667 #30 NEW cov: 12205 ft: 14717 corp: 14/352b lim: 35 exec/s: 0 rss: 71Mb L: 13/35 MS: 4 ChangeBit-ChangeByte-InsertByte-InsertRepeatedBytes- 00:06:36.667 [2024-07-15 20:20:29.042691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3f3fc040 cdw11:3f3f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:29.042715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.667 [2024-07-15 20:20:29.042772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c03f40 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:29.042785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.667 [2024-07-15 20:20:29.042840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0e0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:29.042853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.667 [2024-07-15 20:20:29.042908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.667 [2024-07-15 20:20:29.042921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.926 #31 NEW cov: 12205 ft: 14740 corp: 15/385b lim: 35 exec/s: 0 rss: 71Mb L: 33/35 MS: 1 EraseBytes- 00:06:36.926 [2024-07-15 20:20:29.092349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ff03 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.092373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.926 #32 NEW cov: 12205 ft: 14773 corp: 16/397b lim: 35 exec/s: 0 rss: 71Mb L: 12/35 MS: 1 PersAutoDict- DE: "\377\003\000\000\000\000\000\000"- 00:06:36.926 [2024-07-15 20:20:29.143120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.143145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.926 [2024-07-15 20:20:29.143200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.143214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.926 [2024-07-15 20:20:29.143265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0c0c0 cdw11:c0400003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.143278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.926 [2024-07-15 20:20:29.143331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.143344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.926 [2024-07-15 20:20:29.143397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:c0c0c0c0 cdw11:55c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.143410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:36.926 #33 NEW cov: 12205 ft: 14788 corp: 17/432b lim: 35 exec/s: 33 rss: 71Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:36.926 [2024-07-15 20:20:29.182763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:030a030a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.182790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.926 [2024-07-15 20:20:29.182848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.182862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.926 #38 NEW cov: 12205 ft: 14818 corp: 18/451b lim: 35 exec/s: 38 rss: 71Mb L: 19/35 MS: 5 
InsertByte-ShuffleBytes-CopyPart-CopyPart-InsertRepeatedBytes- 00:06:36.926 [2024-07-15 20:20:29.223160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3f3fc040 cdw11:3f3f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.223185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.926 [2024-07-15 20:20:29.223240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c03f40 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.223253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.926 [2024-07-15 20:20:29.223309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0e0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.223323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.926 [2024-07-15 20:20:29.223377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.223390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.926 #39 NEW cov: 12205 ft: 14898 corp: 19/485b lim: 35 exec/s: 39 rss: 71Mb L: 34/35 MS: 1 ChangeBit- 00:06:36.926 [2024-07-15 20:20:29.262813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff29ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.926 [2024-07-15 20:20:29.262837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.926 #40 NEW cov: 12205 ft: 14921 corp: 20/498b lim: 35 exec/s: 40 rss: 72Mb L: 13/35 MS: 1 ShuffleBytes- 00:06:37.185 [2024-07-15 20:20:29.313115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.185 [2024-07-15 20:20:29.313139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.186 [2024-07-15 20:20:29.313195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000c000 cdw11:00c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.313209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.186 #41 NEW cov: 12205 ft: 14940 corp: 21/517b lim: 35 exec/s: 41 rss: 72Mb L: 19/35 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:37.186 [2024-07-15 20:20:29.353069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffcf0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.353094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.186 #42 NEW cov: 12205 ft: 14964 corp: 22/530b lim: 35 exec/s: 42 rss: 72Mb L: 13/35 MS: 1 InsertByte- 00:06:37.186 [2024-07-15 20:20:29.393190] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffcf0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.393214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.186 #43 NEW cov: 12205 ft: 15015 corp: 23/543b lim: 35 exec/s: 43 rss: 72Mb L: 13/35 MS: 1 ChangeBinInt- 00:06:37.186 [2024-07-15 20:20:29.443836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000c0c0 cdw11:00040003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.443860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.186 [2024-07-15 20:20:29.443914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.443927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.186 [2024-07-15 20:20:29.443982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0e0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.443996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.186 [2024-07-15 20:20:29.444050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.444063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.186 #44 NEW cov: 12205 ft: 15068 corp: 24/577b lim: 35 exec/s: 44 rss: 72Mb L: 34/35 MS: 1 CMP- DE: "\000\000\000\004"- 00:06:37.186 [2024-07-15 20:20:29.483949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.483973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.186 [2024-07-15 20:20:29.484034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.484048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.186 [2024-07-15 20:20:29.484100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.484113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.186 [2024-07-15 20:20:29.484165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.484178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.186 #45 NEW cov: 12205 ft: 
15083 corp: 25/610b lim: 35 exec/s: 45 rss: 72Mb L: 33/35 MS: 1 EraseBytes- 00:06:37.186 [2024-07-15 20:20:29.523904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ff03 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.523928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.186 [2024-07-15 20:20:29.523983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:000000c0 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.523996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.186 [2024-07-15 20:20:29.524066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c055c0c0 cdw11:c0000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.186 [2024-07-15 20:20:29.524080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.186 #46 NEW cov: 12205 ft: 15296 corp: 26/633b lim: 35 exec/s: 46 rss: 72Mb L: 23/35 MS: 1 CrossOver- 00:06:37.445 [2024-07-15 20:20:29.573717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.573742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.445 #47 NEW cov: 12205 ft: 15310 corp: 27/643b lim: 35 exec/s: 47 rss: 72Mb L: 10/35 MS: 1 EraseBytes- 00:06:37.445 [2024-07-15 20:20:29.613976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.614001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.445 [2024-07-15 20:20:29.614053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:13c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.614066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.445 #48 NEW cov: 12205 ft: 15322 corp: 28/662b lim: 35 exec/s: 48 rss: 72Mb L: 19/35 MS: 1 CrossOver- 00:06:37.445 [2024-07-15 20:20:29.663977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ff03 cdw11:000c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.664001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.445 #49 NEW cov: 12205 ft: 15352 corp: 29/674b lim: 35 exec/s: 49 rss: 72Mb L: 12/35 MS: 1 ChangeBinInt- 00:06:37.445 [2024-07-15 20:20:29.704586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000c0c0 cdw11:00040003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.704613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.445 [2024-07-15 20:20:29.704667] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.704680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.445 [2024-07-15 20:20:29.704732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0e0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.704745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.445 [2024-07-15 20:20:29.704798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.704811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.445 #50 NEW cov: 12205 ft: 15379 corp: 30/708b lim: 35 exec/s: 50 rss: 72Mb L: 34/35 MS: 1 ShuffleBytes- 00:06:37.445 [2024-07-15 20:20:29.754875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.754900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.445 [2024-07-15 20:20:29.754954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.754967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.445 [2024-07-15 20:20:29.755017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.755030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.445 [2024-07-15 20:20:29.755079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:37c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.755091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.445 [2024-07-15 20:20:29.755143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:c0c0c0c0 cdw11:45c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.755156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:37.445 #51 NEW cov: 12205 ft: 15384 corp: 31/743b lim: 35 exec/s: 51 rss: 72Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:37.445 [2024-07-15 20:20:29.794523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.794549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.445 [2024-07-15 20:20:29.794603] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:13c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.445 [2024-07-15 20:20:29.794616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.445 #52 NEW cov: 12205 ft: 15394 corp: 32/762b lim: 35 exec/s: 52 rss: 72Mb L: 19/35 MS: 1 CrossOver- 00:06:37.704 [2024-07-15 20:20:29.834803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.834832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.704 [2024-07-15 20:20:29.834888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:13c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.834902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.704 [2024-07-15 20:20:29.834953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c013c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.834966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.704 #53 NEW cov: 12205 ft: 15428 corp: 33/786b lim: 35 exec/s: 53 rss: 72Mb L: 24/35 MS: 1 CopyPart- 00:06:37.704 [2024-07-15 20:20:29.885263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.885289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.704 [2024-07-15 20:20:29.885340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.885354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.704 [2024-07-15 20:20:29.885406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.885419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.704 [2024-07-15 20:20:29.885471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.885484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.704 [2024-07-15 20:20:29.885533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:3fee0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.885547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:37.704 #54 NEW cov: 12205 ft: 15433 
corp: 34/821b lim: 35 exec/s: 54 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:06:37.704 [2024-07-15 20:20:29.934763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff2affff cdw11:3fbe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.934788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.704 #55 NEW cov: 12205 ft: 15462 corp: 35/833b lim: 35 exec/s: 55 rss: 72Mb L: 12/35 MS: 1 CMP- DE: "\377*?\276\310\241\315\354"- 00:06:37.704 [2024-07-15 20:20:29.975326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:4444c044 cdw11:44440002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.704 [2024-07-15 20:20:29.975351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.704 [2024-07-15 20:20:29.975404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:29.975417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.705 [2024-07-15 20:20:29.975472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:29.975489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.705 [2024-07-15 20:20:29.975537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0550003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:29.975550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.705 #56 NEW cov: 12205 ft: 15479 corp: 36/862b lim: 35 exec/s: 56 rss: 72Mb L: 29/35 MS: 1 InsertRepeatedBytes- 00:06:37.705 [2024-07-15 20:20:30.015556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3f3fc040 cdw11:3f3f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:30.015581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.705 [2024-07-15 20:20:30.015636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c03f40 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:30.015650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.705 [2024-07-15 20:20:30.015704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e0c0c0e0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:30.015718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.705 [2024-07-15 20:20:30.015770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:30.015783] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:37.705 [2024-07-15 20:20:30.015835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:30.015848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:37.705 #57 NEW cov: 12205 ft: 15589 corp: 37/897b lim: 35 exec/s: 57 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:06:37.705 [2024-07-15 20:20:30.065271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:30.065298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.705 [2024-07-15 20:20:30.065352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:c0c0c0c0 cdw11:c0c00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.705 [2024-07-15 20:20:30.065366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.964 #58 NEW cov: 12205 ft: 15611 corp: 38/915b lim: 35 exec/s: 58 rss: 72Mb L: 18/35 MS: 1 EraseBytes- 00:06:37.964 [2024-07-15 20:20:30.115390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff29ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.964 [2024-07-15 20:20:30.115417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.964 [2024-07-15 20:20:30.115467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.964 [2024-07-15 20:20:30.115481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.964 #59 NEW cov: 12205 ft: 15640 corp: 39/932b lim: 35 exec/s: 29 rss: 72Mb L: 17/35 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:06:37.964 #59 DONE cov: 12205 ft: 15640 corp: 39/932b lim: 35 exec/s: 29 rss: 72Mb 00:06:37.964 ###### Recommended dictionary. ###### 00:06:37.964 "\377\003\000\000\000\000\000\000" # Uses: 1 00:06:37.964 "\000\000\000\000" # Uses: 1 00:06:37.964 "\000\000\000\004" # Uses: 0 00:06:37.964 "\377*?\276\310\241\315\354" # Uses: 0 00:06:37.964 ###### End of recommended dictionary. 
###### 00:06:37.964 Done 59 runs in 2 second(s) 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:37.964 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:37.965 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:37.965 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:37.965 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:37.965 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:37.965 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:37.965 20:20:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:37.965 [2024-07-15 20:20:30.310067] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:37.965 [2024-07-15 20:20:30.310138] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320433 ] 00:06:37.965 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.223 [2024-07-15 20:20:30.489489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.223 [2024-07-15 20:20:30.556641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.482 [2024-07-15 20:20:30.616332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.482 [2024-07-15 20:20:30.632635] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:38.482 INFO: Running with entropic power schedule (0xFF, 100). 00:06:38.482 INFO: Seed: 926056294 00:06:38.482 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:38.482 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:38.482 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:38.482 INFO: A corpus is not provided, starting from an empty corpus 00:06:38.482 #2 INITED exec/s: 0 rss: 64Mb 00:06:38.482 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:38.482 This may also happen if the target rejected all inputs we tried so far 00:06:38.482 [2024-07-15 20:20:30.699172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.482 [2024-07-15 20:20:30.699211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.482 [2024-07-15 20:20:30.699342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.482 [2024-07-15 20:20:30.699361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.741 NEW_FUNC[1/698]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:38.741 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:38.741 #10 NEW cov: 11989 ft: 11986 corp: 2/21b lim: 45 exec/s: 0 rss: 70Mb L: 20/20 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:06:38.741 [2024-07-15 20:20:31.039948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.741 [2024-07-15 20:20:31.039992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.741 [2024-07-15 20:20:31.040111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.741 [2024-07-15 20:20:31.040131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.741 #11 NEW cov: 12102 ft: 12711 corp: 3/46b lim: 45 
exec/s: 0 rss: 70Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:06:38.741 [2024-07-15 20:20:31.100213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.741 [2024-07-15 20:20:31.100246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.741 [2024-07-15 20:20:31.100362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.741 [2024-07-15 20:20:31.100379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.741 [2024-07-15 20:20:31.100500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.741 [2024-07-15 20:20:31.100517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.000 #12 NEW cov: 12108 ft: 13140 corp: 4/81b lim: 45 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:39.000 [2024-07-15 20:20:31.150072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.150101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.150217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.150233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.000 #13 NEW cov: 12193 ft: 13426 corp: 5/106b lim: 45 exec/s: 0 rss: 70Mb L: 25/35 MS: 1 ShuffleBytes- 00:06:39.000 [2024-07-15 20:20:31.190702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.190732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.190850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.190866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.190981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a6666 cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.190999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.191111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a668a8a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.191127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.000 #14 NEW cov: 12193 ft: 13868 corp: 6/145b lim: 45 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 CMP- DE: "\036\000\000\000"- 00:06:39.000 [2024-07-15 20:20:31.240610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.240636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.240756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.240775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.240891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.240911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.000 #15 NEW cov: 12193 ft: 13940 corp: 7/180b lim: 45 exec/s: 0 rss: 70Mb L: 35/39 MS: 1 PersAutoDict- DE: "\036\000\000\000"- 00:06:39.000 [2024-07-15 20:20:31.280990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.281016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.281132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1e00ffff cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.281147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.281260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:6666ff66 cdw11:668a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.281277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.281387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.281403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.000 #21 NEW cov: 12193 ft: 13997 corp: 8/222b lim: 45 exec/s: 0 rss: 71Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:06:39.000 [2024-07-15 20:20:31.330876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.330907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.331033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 
cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.331051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.331172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.331199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.000 #22 NEW cov: 12193 ft: 14048 corp: 9/257b lim: 45 exec/s: 0 rss: 71Mb L: 35/42 MS: 1 ChangeByte- 00:06:39.000 [2024-07-15 20:20:31.380898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.380927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.381043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.381063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.381172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.381190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.000 [2024-07-15 20:20:31.381304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a88aa8a8 cdw11:8a660007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.000 [2024-07-15 20:20:31.381323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.259 #23 NEW cov: 12193 ft: 14150 corp: 10/298b lim: 45 exec/s: 0 rss: 71Mb L: 41/42 MS: 1 InsertRepeatedBytes- 00:06:39.259 [2024-07-15 20:20:31.431278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.259 [2024-07-15 20:20:31.431305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.260 [2024-07-15 20:20:31.431412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.260 [2024-07-15 20:20:31.431431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.260 [2024-07-15 20:20:31.431554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a6666 cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.260 [2024-07-15 20:20:31.431573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.260 [2024-07-15 20:20:31.431686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 
nsid:0 cdw10:8a668a82 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.260 [2024-07-15 20:20:31.431703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.260 #24 NEW cov: 12193 ft: 14230 corp: 11/337b lim: 45 exec/s: 0 rss: 71Mb L: 39/42 MS: 1 ChangeBit- 00:06:39.260 [2024-07-15 20:20:31.481042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:19000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.260 [2024-07-15 20:20:31.481074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.260 [2024-07-15 20:20:31.481196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.260 [2024-07-15 20:20:31.481215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.260 #25 NEW cov: 12193 ft: 14253 corp: 12/362b lim: 45 exec/s: 0 rss: 71Mb L: 25/42 MS: 1 ChangeBinInt- 00:06:39.260 [2024-07-15 20:20:31.530801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f4f4f4f4 cdw11:f4f40007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.260 [2024-07-15 20:20:31.530829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.260 [2024-07-15 20:20:31.530950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f4f4f4f4 cdw11:f4f40007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.260 [2024-07-15 20:20:31.530966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.260 #26 NEW cov: 12193 ft: 14329 corp: 13/381b lim: 45 exec/s: 0 rss: 71Mb L: 19/42 MS: 1 InsertRepeatedBytes- 00:06:39.260 [2024-07-15 20:20:31.581386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f4f4f4f4 cdw11:f4ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.260 [2024-07-15 20:20:31.581415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.260 [2024-07-15 20:20:31.581549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:66666666 cdw11:66ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.260 [2024-07-15 20:20:31.581575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.260 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:39.260 #27 NEW cov: 12216 ft: 14364 corp: 14/400b lim: 45 exec/s: 0 rss: 71Mb L: 19/42 MS: 1 CrossOver- 00:06:39.520 [2024-07-15 20:20:31.642352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.642381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.642502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 
cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.642520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.642640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.642656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.642769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a88aa8a8 cdw11:8a660007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.642786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.642910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffff0000 cdw11:ff750000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.642928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.520 #28 NEW cov: 12216 ft: 14450 corp: 15/445b lim: 45 exec/s: 28 rss: 71Mb L: 45/45 MS: 1 PersAutoDict- DE: "\036\000\000\000"- 00:06:39.520 [2024-07-15 20:20:31.701564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f4f42bf4 cdw11:f4f40007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.701592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.701716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f4f4f4f4 cdw11:f4f40007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.701732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.520 #29 NEW cov: 12216 ft: 14475 corp: 16/465b lim: 45 exec/s: 29 rss: 71Mb L: 20/45 MS: 1 InsertByte- 00:06:39.520 [2024-07-15 20:20:31.742287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.742318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.742429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.742451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.742568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.742586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.742700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO 
SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.742718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.742840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.742857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.520 #30 NEW cov: 12216 ft: 14509 corp: 17/510b lim: 45 exec/s: 30 rss: 71Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:06:39.520 [2024-07-15 20:20:31.781951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.781981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.782099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:668affff cdw11:ff660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.782119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.782247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.782263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.520 #31 NEW cov: 12216 ft: 14598 corp: 18/545b lim: 45 exec/s: 31 rss: 71Mb L: 35/45 MS: 1 ShuffleBytes- 00:06:39.520 [2024-07-15 20:20:31.832672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.832702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.832822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ff1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.832842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.832960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a6666 cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.832979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.833099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a668a8a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.833116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.520 #32 NEW cov: 12216 ft: 14615 corp: 19/584b lim: 45 exec/s: 32 rss: 71Mb L: 39/45 MS: 1 
PersAutoDict- DE: "\036\000\000\000"- 00:06:39.520 [2024-07-15 20:20:31.882898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.882927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.883047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.883065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.883177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.883197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.520 [2024-07-15 20:20:31.883311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.520 [2024-07-15 20:20:31.883326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.780 #33 NEW cov: 12216 ft: 14651 corp: 20/628b lim: 45 exec/s: 33 rss: 71Mb L: 44/45 MS: 1 CopyPart- 00:06:39.780 [2024-07-15 20:20:31.922575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:31.922603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:31.922736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:31.922755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:31.922872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:31.922891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:31.923008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:31.923026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.780 #34 NEW cov: 12216 ft: 14686 corp: 21/672b lim: 45 exec/s: 34 rss: 72Mb L: 44/45 MS: 1 ChangeBit- 00:06:39.780 [2024-07-15 20:20:31.973302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:31.973328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:31.973435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:31.973456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:31.973578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffff8aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:31.973596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:31.973713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:31.973730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:31.973851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:31.973870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.780 #35 NEW cov: 12216 ft: 14700 corp: 22/717b lim: 45 exec/s: 35 rss: 72Mb L: 45/45 MS: 1 CrossOver- 00:06:39.780 [2024-07-15 20:20:32.023188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.023216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.023337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.023353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.023470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.023487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.023593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:a88aa8a8 cdw11:8a660007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.023609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.780 #36 NEW cov: 12216 ft: 14711 corp: 23/758b lim: 45 exec/s: 36 rss: 72Mb L: 41/45 MS: 1 ShuffleBytes- 00:06:39.780 [2024-07-15 20:20:32.062494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.062520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.062642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.062660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.780 #37 NEW cov: 12216 ft: 14735 corp: 24/783b lim: 45 exec/s: 37 rss: 72Mb L: 25/45 MS: 1 EraseBytes- 00:06:39.780 [2024-07-15 20:20:32.103112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0ab7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.103140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.103261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:668affff cdw11:ff660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.103288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.103409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.103426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.780 #38 NEW cov: 12216 ft: 14757 corp: 25/818b lim: 45 exec/s: 38 rss: 72Mb L: 35/45 MS: 1 ChangeByte- 00:06:39.780 [2024-07-15 20:20:32.153743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.153769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.153892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.153911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.154031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:43434343 cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.154047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.154163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.154179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.780 [2024-07-15 20:20:32.154296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffff66ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.780 [2024-07-15 20:20:32.154312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.039 #39 NEW cov: 12216 ft: 14760 corp: 26/863b lim: 45 exec/s: 39 rss: 72Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:06:40.039 [2024-07-15 20:20:32.203882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.203908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.204025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.204044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.204168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffff8aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.204184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.204301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.204319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.204438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.204457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.039 #40 NEW cov: 12216 ft: 14765 corp: 27/908b lim: 45 exec/s: 40 rss: 72Mb L: 45/45 MS: 1 ChangeBit- 00:06:40.039 [2024-07-15 20:20:32.253777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.253803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.253926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:1effffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.253941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.254069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.254085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.254202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffff66ff cdw11:ff1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.254219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.039 #41 NEW cov: 12216 ft: 14772 corp: 28/944b lim: 45 exec/s: 41 rss: 72Mb L: 36/45 MS: 1 InsertByte- 00:06:40.039 [2024-07-15 20:20:32.293608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ff4f0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.293635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.293761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.293778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.293898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.293914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.039 #42 NEW cov: 12216 ft: 14777 corp: 29/979b lim: 45 exec/s: 42 rss: 72Mb L: 35/45 MS: 1 ChangeByte- 00:06:40.039 [2024-07-15 20:20:32.334005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.334032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.334154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.334172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.334286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a6666 cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.334303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.334421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:62626262 cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.334439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.039 #43 NEW cov: 12216 ft: 14791 corp: 30/1023b lim: 45 exec/s: 43 rss: 72Mb L: 44/45 MS: 1 InsertRepeatedBytes- 00:06:40.039 [2024-07-15 20:20:32.374574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:92920aff cdw11:92920004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.374604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.374737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff9292 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:40.039 [2024-07-15 20:20:32.374753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.374871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:1effffff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.374899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.375012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.375030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.039 [2024-07-15 20:20:32.375147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffff66ff cdw11:ff1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.039 [2024-07-15 20:20:32.375165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.039 #44 NEW cov: 12216 ft: 14834 corp: 31/1068b lim: 45 exec/s: 44 rss: 72Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:06:40.298 [2024-07-15 20:20:32.423771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.298 [2024-07-15 20:20:32.423800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.298 [2024-07-15 20:20:32.423916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66bfff cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.298 [2024-07-15 20:20:32.423934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.298 #45 NEW cov: 12216 ft: 14840 corp: 32/1093b lim: 45 exec/s: 45 rss: 72Mb L: 25/45 MS: 1 ChangeBit- 00:06:40.298 [2024-07-15 20:20:32.464675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:92920aff cdw11:92920004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.298 [2024-07-15 20:20:32.464701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.298 [2024-07-15 20:20:32.464815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff9292 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.298 [2024-07-15 20:20:32.464831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.298 [2024-07-15 20:20:32.464961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8aff8a cdw11:66660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.298 [2024-07-15 20:20:32.464977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.298 [2024-07-15 20:20:32.465095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:40.298 [2024-07-15 20:20:32.465111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.298 [2024-07-15 20:20:32.465224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffff66ff cdw11:ff1e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.298 [2024-07-15 20:20:32.465244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.298 #46 NEW cov: 12216 ft: 14901 corp: 33/1138b lim: 45 exec/s: 46 rss: 72Mb L: 45/45 MS: 1 CrossOver- 00:06:40.298 [2024-07-15 20:20:32.514821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.298 [2024-07-15 20:20:32.514849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.299 [2024-07-15 20:20:32.514965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ff66ffff cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.514982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.299 [2024-07-15 20:20:32.515102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffff8aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.515119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.299 [2024-07-15 20:20:32.515237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.515253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.299 [2024-07-15 20:20:32.515376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.515392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.299 #47 NEW cov: 12216 ft: 14911 corp: 34/1183b lim: 45 exec/s: 47 rss: 72Mb L: 45/45 MS: 1 ChangeBinInt- 00:06:40.299 [2024-07-15 20:20:32.554111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f4f42bf4 cdw11:f4130007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.554137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.299 [2024-07-15 20:20:32.554255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f4f4f4f4 cdw11:f4f40007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.554273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.299 #48 NEW cov: 12216 ft: 14951 corp: 35/1204b lim: 45 exec/s: 48 rss: 72Mb L: 21/45 MS: 1 InsertByte- 00:06:40.299 [2024-07-15 20:20:32.604538] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0ab7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.604567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.299 [2024-07-15 20:20:32.604683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:668affff cdw11:ff660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.604700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.299 [2024-07-15 20:20:32.604815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.604831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.299 #49 NEW cov: 12216 ft: 14973 corp: 36/1239b lim: 45 exec/s: 49 rss: 73Mb L: 35/45 MS: 1 ChangeBit- 00:06:40.299 [2024-07-15 20:20:32.654247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0ab7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.654274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.299 [2024-07-15 20:20:32.654398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:668affff cdw11:ff660003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.654416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.299 [2024-07-15 20:20:32.654534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.299 [2024-07-15 20:20:32.654550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.558 #50 NEW cov: 12216 ft: 14979 corp: 37/1274b lim: 45 exec/s: 25 rss: 73Mb L: 35/45 MS: 1 ChangeBinInt- 00:06:40.558 #50 DONE cov: 12216 ft: 14979 corp: 37/1274b lim: 45 exec/s: 25 rss: 73Mb 00:06:40.558 ###### Recommended dictionary. ###### 00:06:40.558 "\036\000\000\000" # Uses: 4 00:06:40.558 ###### End of recommended dictionary. 
###### 00:06:40.558 Done 50 runs in 2 second(s) 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:40.558 20:20:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:40.558 [2024-07-15 20:20:32.855766] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:40.558 [2024-07-15 20:20:32.855855] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320745 ] 00:06:40.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.817 [2024-07-15 20:20:33.044371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.817 [2024-07-15 20:20:33.118262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.817 [2024-07-15 20:20:33.177959] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.817 [2024-07-15 20:20:33.194243] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:41.075 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.075 INFO: Seed: 3488058749 00:06:41.075 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:41.075 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:41.075 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:41.075 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.075 #2 INITED exec/s: 0 rss: 64Mb 00:06:41.075 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.075 This may also happen if the target rejected all inputs we tried so far 00:06:41.075 [2024-07-15 20:20:33.260239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.075 [2024-07-15 20:20:33.260273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.333 NEW_FUNC[1/696]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:41.333 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:41.333 #4 NEW cov: 11906 ft: 11906 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 2 CrossOver-CrossOver- 00:06:41.333 [2024-07-15 20:20:33.601275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.333 [2024-07-15 20:20:33.601312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.333 #5 NEW cov: 12019 ft: 12611 corp: 3/5b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:41.333 [2024-07-15 20:20:33.662067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.333 [2024-07-15 20:20:33.662097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.333 [2024-07-15 20:20:33.662234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.333 [2024-07-15 20:20:33.662254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.333 [2024-07-15 20:20:33.662380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 
nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.333 [2024-07-15 20:20:33.662396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.333 [2024-07-15 20:20:33.662527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009b0a cdw11:00000000 00:06:41.333 [2024-07-15 20:20:33.662547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.333 #6 NEW cov: 12025 ft: 13061 corp: 4/14b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:41.590 [2024-07-15 20:20:33.722450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.722476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.722613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002b cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.722629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.722763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003fc1 cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.722779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.722909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003934 cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.722927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.723060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000772a cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.723078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.591 #7 NEW cov: 12110 ft: 13333 corp: 5/24b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 CMP- DE: "\000+?\30194w*"- 00:06:41.591 [2024-07-15 20:20:33.771831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.771858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 #8 NEW cov: 12110 ft: 13447 corp: 6/26b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:41.591 [2024-07-15 20:20:33.821883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.821911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 #9 NEW cov: 12110 ft: 13722 corp: 7/28b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 CopyPart- 00:06:41.591 [2024-07-15 20:20:33.882865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.882895] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.883034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b3f cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.883052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.883194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c139 cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.883212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.883348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003477 cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.883368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.591 #11 NEW cov: 12110 ft: 13784 corp: 8/37b lim: 10 exec/s: 0 rss: 71Mb L: 9/10 MS: 2 ChangeBit-PersAutoDict- DE: "\000+?\30194w*"- 00:06:41.591 [2024-07-15 20:20:33.932888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a34 cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.932915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.933043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b00 cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.933061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.933183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c13f cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.933200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.591 [2024-07-15 20:20:33.933322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003977 cdw11:00000000 00:06:41.591 [2024-07-15 20:20:33.933339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.591 #12 NEW cov: 12110 ft: 13808 corp: 9/46b lim: 10 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:41.849 [2024-07-15 20:20:33.993128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.849 [2024-07-15 20:20:33.993157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.849 [2024-07-15 20:20:33.993286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.849 [2024-07-15 20:20:33.993303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.849 [2024-07-15 20:20:33.993438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.849 [2024-07-15 
20:20:33.993459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.849 [2024-07-15 20:20:33.993607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009b0a cdw11:00000000 00:06:41.849 [2024-07-15 20:20:33.993625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.849 #13 NEW cov: 12110 ft: 13899 corp: 10/55b lim: 10 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:41.850 [2024-07-15 20:20:34.053301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.053327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.053464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.053494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.053615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b2b cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.053631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.053753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009b0a cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.053770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.850 #14 NEW cov: 12110 ft: 13971 corp: 11/64b lim: 10 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 ChangeByte- 00:06:41.850 [2024-07-15 20:20:34.113698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000002b cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.113730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.113852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003fc1 cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.113870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.113993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003934 cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.114009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.114137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000772a cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.114154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.114282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000772a cdw11:00000000 00:06:41.850 [2024-07-15 
20:20:34.114300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.850 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:41.850 #15 NEW cov: 12133 ft: 14070 corp: 12/74b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000+?\30194w*"- 00:06:41.850 [2024-07-15 20:20:34.173790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.173819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.173953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.173972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.174096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.174114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.850 [2024-07-15 20:20:34.174242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.174261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.850 #16 NEW cov: 12133 ft: 14100 corp: 13/82b lim: 10 exec/s: 0 rss: 71Mb L: 8/10 MS: 1 EraseBytes- 00:06:41.850 [2024-07-15 20:20:34.223103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0e cdw11:00000000 00:06:41.850 [2024-07-15 20:20:34.223131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.108 #17 NEW cov: 12133 ft: 14174 corp: 14/84b lim: 10 exec/s: 17 rss: 71Mb L: 2/10 MS: 1 ChangeBit- 00:06:42.108 [2024-07-15 20:20:34.274325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000559b cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.274352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.274487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.274506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.274624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.274648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.274777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.274795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.274923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.274939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.108 #18 NEW cov: 12133 ft: 14264 corp: 15/94b lim: 10 exec/s: 18 rss: 71Mb L: 10/10 MS: 1 InsertByte- 00:06:42.108 [2024-07-15 20:20:34.323497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007a7a cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.323525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.108 #22 NEW cov: 12133 ft: 14284 corp: 16/96b lim: 10 exec/s: 22 rss: 71Mb L: 2/10 MS: 4 CrossOver-CopyPart-ChangeByte-CopyPart- 00:06:42.108 [2024-07-15 20:20:34.384316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a9b cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.384343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.384469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.384487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.384615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.384632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.384757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.384774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.108 #23 NEW cov: 12133 ft: 14325 corp: 17/105b lim: 10 exec/s: 23 rss: 72Mb L: 9/10 MS: 1 CrossOver- 00:06:42.108 [2024-07-15 20:20:34.434528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.434556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.434694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b3f cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.434711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.434847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c139 cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.434865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.108 [2024-07-15 20:20:34.435000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 
nsid:0 cdw10:00003477 cdw11:00000000 00:06:42.108 [2024-07-15 20:20:34.435019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.108 #24 NEW cov: 12133 ft: 14333 corp: 18/114b lim: 10 exec/s: 24 rss: 72Mb L: 9/10 MS: 1 PersAutoDict- DE: "\000+?\30194w*"- 00:06:42.366 [2024-07-15 20:20:34.494044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7a cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.494074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.366 #25 NEW cov: 12133 ft: 14352 corp: 19/116b lim: 10 exec/s: 25 rss: 72Mb L: 2/10 MS: 1 CrossOver- 00:06:42.366 [2024-07-15 20:20:34.554230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000af1 cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.554261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.366 #26 NEW cov: 12133 ft: 14411 corp: 20/118b lim: 10 exec/s: 26 rss: 72Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:42.366 [2024-07-15 20:20:34.604803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a00 cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.604831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.366 [2024-07-15 20:20:34.604964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b3f cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.604985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.366 [2024-07-15 20:20:34.605121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003477 cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.605139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.366 #27 NEW cov: 12133 ft: 14618 corp: 21/125b lim: 10 exec/s: 27 rss: 72Mb L: 7/10 MS: 1 EraseBytes- 00:06:42.366 [2024-07-15 20:20:34.665256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a9b cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.665286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.366 [2024-07-15 20:20:34.665418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.665435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.366 [2024-07-15 20:20:34.665567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.665584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.366 [2024-07-15 20:20:34.665705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.665723] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.366 #28 NEW cov: 12133 ft: 14627 corp: 22/134b lim: 10 exec/s: 28 rss: 72Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:42.366 [2024-07-15 20:20:34.725429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.725461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.366 [2024-07-15 20:20:34.725587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.725606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.366 [2024-07-15 20:20:34.725731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b65 cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.725762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.366 [2024-07-15 20:20:34.725888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000640a cdw11:00000000 00:06:42.366 [2024-07-15 20:20:34.725905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.366 #29 NEW cov: 12133 ft: 14634 corp: 23/143b lim: 10 exec/s: 29 rss: 72Mb L: 9/10 MS: 1 ChangeBinInt- 00:06:42.624 [2024-07-15 20:20:34.775064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007a7a cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.775094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.624 [2024-07-15 20:20:34.775222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.775242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.624 #30 NEW cov: 12133 ft: 14793 corp: 24/148b lim: 10 exec/s: 30 rss: 72Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:06:42.624 [2024-07-15 20:20:34.835038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.835067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.624 #33 NEW cov: 12133 ft: 14799 corp: 25/150b lim: 10 exec/s: 33 rss: 72Mb L: 2/10 MS: 3 EraseBytes-CopyPart-CopyPart- 00:06:42.624 [2024-07-15 20:20:34.885765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.885794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.624 [2024-07-15 20:20:34.885928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003fc1 cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.885949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.624 [2024-07-15 20:20:34.886079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003934 cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.886098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.624 #34 NEW cov: 12133 ft: 14802 corp: 26/156b lim: 10 exec/s: 34 rss: 72Mb L: 6/10 MS: 1 CrossOver- 00:06:42.624 [2024-07-15 20:20:34.946147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a34 cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.946176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.624 [2024-07-15 20:20:34.946317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002b00 cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.946338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.624 [2024-07-15 20:20:34.946465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c13f cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.946486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.624 [2024-07-15 20:20:34.946608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003977 cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.946626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.624 #35 NEW cov: 12133 ft: 14816 corp: 27/165b lim: 10 exec/s: 35 rss: 72Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:42.624 [2024-07-15 20:20:34.995726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.995757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.624 [2024-07-15 20:20:34.995882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:42.624 [2024-07-15 20:20:34.995912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.883 #36 NEW cov: 12133 ft: 14821 corp: 28/169b lim: 10 exec/s: 36 rss: 72Mb L: 4/10 MS: 1 InsertRepeatedBytes- 00:06:42.883 [2024-07-15 20:20:35.046398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000099b cdw11:00000000 00:06:42.883 [2024-07-15 20:20:35.046427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.883 [2024-07-15 20:20:35.046567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009b9b cdw11:00000000 00:06:42.883 [2024-07-15 20:20:35.046586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.883 [2024-07-15 20:20:35.046718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009b9b cdw11:00000000 
00:06:42.883 [2024-07-15 20:20:35.046737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.883 [2024-07-15 20:20:35.046863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.883 [2024-07-15 20:20:35.046881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.883 #37 NEW cov: 12133 ft: 14856 corp: 29/177b lim: 10 exec/s: 37 rss: 72Mb L: 8/10 MS: 1 ChangeByte- 00:06:42.883 [2024-07-15 20:20:35.096095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.883 [2024-07-15 20:20:35.096122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.883 [2024-07-15 20:20:35.096244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000009b cdw11:00000000 00:06:42.883 [2024-07-15 20:20:35.096261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.883 #38 NEW cov: 12133 ft: 14870 corp: 30/182b lim: 10 exec/s: 38 rss: 72Mb L: 5/10 MS: 1 CrossOver- 00:06:42.883 [2024-07-15 20:20:35.156287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 00:06:42.883 [2024-07-15 20:20:35.156315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.883 [2024-07-15 20:20:35.156437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000009b cdw11:00000000 00:06:42.883 [2024-07-15 20:20:35.156460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.883 #39 NEW cov: 12133 ft: 14899 corp: 31/187b lim: 10 exec/s: 39 rss: 73Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:42.883 [2024-07-15 20:20:35.216150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000af9 cdw11:00000000 00:06:42.883 [2024-07-15 20:20:35.216177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.883 #40 NEW cov: 12133 ft: 14924 corp: 32/189b lim: 10 exec/s: 20 rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:06:42.883 #40 DONE cov: 12133 ft: 14924 corp: 32/189b lim: 10 exec/s: 20 rss: 73Mb 00:06:42.883 ###### Recommended dictionary. ###### 00:06:42.883 "\000+?\30194w*" # Uses: 3 00:06:42.883 ###### End of recommended dictionary. 
######
00:06:42.883 Done 40 runs in 2 second(s)
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407'
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:43.142 20:20:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7
00:06:43.142 [2024-07-15 20:20:35.418139] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:43.142 [2024-07-15 20:20:35.418206] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321285 ] 00:06:43.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.401 [2024-07-15 20:20:35.591997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.401 [2024-07-15 20:20:35.658197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.401 [2024-07-15 20:20:35.717395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.401 [2024-07-15 20:20:35.733664] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:43.401 INFO: Running with entropic power schedule (0xFF, 100). 00:06:43.401 INFO: Seed: 1731103297 00:06:43.401 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:43.401 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:43.401 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:43.401 INFO: A corpus is not provided, starting from an empty corpus 00:06:43.401 #2 INITED exec/s: 0 rss: 63Mb 00:06:43.401 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:43.401 This may also happen if the target rejected all inputs we tried so far 00:06:43.401 [2024-07-15 20:20:35.782825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0e cdw11:00000000 00:06:43.401 [2024-07-15 20:20:35.782853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.919 NEW_FUNC[1/696]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:43.919 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:43.919 #4 NEW cov: 11896 ft: 11903 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 2 ChangeBit-CrossOver- 00:06:43.919 [2024-07-15 20:20:36.103639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e0a cdw11:00000000 00:06:43.919 [2024-07-15 20:20:36.103671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.919 #5 NEW cov: 12019 ft: 12442 corp: 3/6b lim: 10 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CrossOver- 00:06:43.919 [2024-07-15 20:20:36.153676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:43.919 [2024-07-15 20:20:36.153702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.919 #6 NEW cov: 12025 ft: 12669 corp: 4/8b lim: 10 exec/s: 0 rss: 70Mb L: 2/3 MS: 1 CrossOver- 00:06:43.919 [2024-07-15 20:20:36.193790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:43.919 [2024-07-15 20:20:36.193815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.919 #7 NEW cov: 12110 
ft: 12858 corp: 5/11b lim: 10 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CrossOver- 00:06:43.919 [2024-07-15 20:20:36.234220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:43.919 [2024-07-15 20:20:36.234245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.919 [2024-07-15 20:20:36.234293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.919 [2024-07-15 20:20:36.234307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.919 [2024-07-15 20:20:36.234357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.919 [2024-07-15 20:20:36.234370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.920 [2024-07-15 20:20:36.234420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:43.920 [2024-07-15 20:20:36.234432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.920 #8 NEW cov: 12110 ft: 13332 corp: 6/19b lim: 10 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:43.920 [2024-07-15 20:20:36.284140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003f0e cdw11:00000000 00:06:43.920 [2024-07-15 20:20:36.284165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.920 [2024-07-15 20:20:36.284218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0e cdw11:00000000 00:06:43.920 [2024-07-15 20:20:36.284231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.179 #9 NEW cov: 12110 ft: 13607 corp: 7/23b lim: 10 exec/s: 0 rss: 70Mb L: 4/8 MS: 1 InsertByte- 00:06:44.179 [2024-07-15 20:20:36.334214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ef1 cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.334240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.179 #10 NEW cov: 12110 ft: 13664 corp: 8/26b lim: 10 exec/s: 0 rss: 70Mb L: 3/8 MS: 1 ChangeBinInt- 00:06:44.179 [2024-07-15 20:20:36.374530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.374558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.179 [2024-07-15 20:20:36.374635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.374649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.179 [2024-07-15 20:20:36.374715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000e 
cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.374729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.179 #11 NEW cov: 12110 ft: 13838 corp: 9/32b lim: 10 exec/s: 0 rss: 70Mb L: 6/8 MS: 1 InsertRepeatedBytes- 00:06:44.179 [2024-07-15 20:20:36.424801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aed cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.424826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.179 [2024-07-15 20:20:36.424877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000eded cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.424890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.179 [2024-07-15 20:20:36.424940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eded cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.424953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.179 [2024-07-15 20:20:36.425002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ed0a cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.425015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.179 #12 NEW cov: 12110 ft: 13862 corp: 10/40b lim: 10 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:44.179 [2024-07-15 20:20:36.464568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f10e cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.464594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.179 #13 NEW cov: 12110 ft: 13903 corp: 11/43b lim: 10 exec/s: 0 rss: 71Mb L: 3/8 MS: 1 ShuffleBytes- 00:06:44.179 [2024-07-15 20:20:36.514789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003f0a cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.514815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.179 [2024-07-15 20:20:36.514868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0e cdw11:00000000 00:06:44.179 [2024-07-15 20:20:36.514881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.179 #14 NEW cov: 12110 ft: 13943 corp: 12/47b lim: 10 exec/s: 0 rss: 71Mb L: 4/8 MS: 1 ChangeBit- 00:06:44.437 [2024-07-15 20:20:36.564904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.564929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.437 #15 NEW cov: 12110 ft: 14043 corp: 13/50b lim: 10 exec/s: 0 rss: 71Mb L: 3/8 MS: 1 ShuffleBytes- 00:06:44.437 [2024-07-15 20:20:36.605197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:0000da92 cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.605221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.605274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009292 cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.605290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.605344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00009292 cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.605356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.437 #19 NEW cov: 12110 ft: 14060 corp: 14/57b lim: 10 exec/s: 0 rss: 71Mb L: 7/8 MS: 4 ChangeBit-CrossOver-ChangeByte-InsertRepeatedBytes- 00:06:44.437 [2024-07-15 20:20:36.645168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000270e cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.645192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.645259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000e0a cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.645272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.437 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:44.437 #20 NEW cov: 12133 ft: 14093 corp: 15/61b lim: 10 exec/s: 0 rss: 71Mb L: 4/8 MS: 1 InsertByte- 00:06:44.437 [2024-07-15 20:20:36.695552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.695577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.695627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.695641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.695689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.695702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.695753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.695766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.437 #21 NEW cov: 12133 ft: 14148 corp: 16/69b lim: 10 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ShuffleBytes- 00:06:44.437 [2024-07-15 20:20:36.745700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:44.437 [2024-07-15 
20:20:36.745724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.745792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.745805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.745855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.745869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.745918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.745931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.437 #22 NEW cov: 12133 ft: 14177 corp: 17/77b lim: 10 exec/s: 22 rss: 71Mb L: 8/8 MS: 1 CopyPart- 00:06:44.437 [2024-07-15 20:20:36.795831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aed cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.795855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.795921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f3ed cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.795935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.795984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eded cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.795997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.437 [2024-07-15 20:20:36.796049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ed0a cdw11:00000000 00:06:44.437 [2024-07-15 20:20:36.796062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.697 #23 NEW cov: 12133 ft: 14190 corp: 18/85b lim: 10 exec/s: 23 rss: 71Mb L: 8/8 MS: 1 ChangeBinInt- 00:06:44.697 [2024-07-15 20:20:36.845770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000270e cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.845794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:36.845846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000270a cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.845859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.697 #24 NEW cov: 12133 ft: 14203 corp: 19/89b lim: 10 exec/s: 24 rss: 71Mb L: 4/8 MS: 1 CopyPart- 00:06:44.697 [2024-07-15 20:20:36.896018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.896043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:36.896098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.896112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:36.896165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000008e cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.896178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.697 #25 NEW cov: 12133 ft: 14255 corp: 20/95b lim: 10 exec/s: 25 rss: 71Mb L: 6/8 MS: 1 ChangeBit- 00:06:44.697 [2024-07-15 20:20:36.936222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000bed cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.936248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:36.936301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f3ed cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.936314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:36.936365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000eded cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.936378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:36.936433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ed0a cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.936451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.697 #26 NEW cov: 12133 ft: 14265 corp: 21/103b lim: 10 exec/s: 26 rss: 72Mb L: 8/8 MS: 1 ChangeBit- 00:06:44.697 [2024-07-15 20:20:36.986273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.986297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:36.986349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.986363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:36.986415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 00:06:44.697 [2024-07-15 20:20:36.986428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.697 #27 NEW cov: 12133 ft: 14281 corp: 22/109b lim: 10 exec/s: 27 rss: 72Mb L: 6/8 MS: 1 
ShuffleBytes- 00:06:44.697 [2024-07-15 20:20:37.026497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a10 cdw11:00000000 00:06:44.697 [2024-07-15 20:20:37.026521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:37.026572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.697 [2024-07-15 20:20:37.026585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:37.026636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.697 [2024-07-15 20:20:37.026649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:37.026701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.697 [2024-07-15 20:20:37.026714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.697 #28 NEW cov: 12133 ft: 14313 corp: 23/117b lim: 10 exec/s: 28 rss: 72Mb L: 8/8 MS: 1 ChangeBinInt- 00:06:44.697 [2024-07-15 20:20:37.066619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:44.697 [2024-07-15 20:20:37.066643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:37.066711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a11 cdw11:00000000 00:06:44.697 [2024-07-15 20:20:37.066725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:37.066776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.697 [2024-07-15 20:20:37.066789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.697 [2024-07-15 20:20:37.066840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 00:06:44.697 [2024-07-15 20:20:37.066853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.957 #29 NEW cov: 12133 ft: 14325 corp: 24/125b lim: 10 exec/s: 29 rss: 72Mb L: 8/8 MS: 1 ChangeBinInt- 00:06:44.957 [2024-07-15 20:20:37.116764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.116788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.116854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff31 cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.116867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.116917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.116930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.116981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.116995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.957 #30 NEW cov: 12133 ft: 14361 corp: 25/133b lim: 10 exec/s: 30 rss: 72Mb L: 8/8 MS: 1 ChangeByte- 00:06:44.957 [2024-07-15 20:20:37.166648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.166672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.166739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0e cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.166752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.957 #31 NEW cov: 12133 ft: 14369 corp: 26/137b lim: 10 exec/s: 31 rss: 72Mb L: 4/8 MS: 1 CopyPart- 00:06:44.957 [2024-07-15 20:20:37.207044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.207069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.207137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.207151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.207202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000d7d7 cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.207215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.207267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.207280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:44.957 #32 NEW cov: 12133 ft: 14419 corp: 27/146b lim: 10 exec/s: 32 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:44.957 [2024-07-15 20:20:37.246748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0e cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.246772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.957 #33 NEW cov: 12133 ft: 14464 corp: 28/149b lim: 10 exec/s: 33 rss: 72Mb L: 3/9 MS: 1 InsertByte- 00:06:44.957 [2024-07-15 20:20:37.286961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003ff4 cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.286985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.287052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f50e cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.287066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.957 #34 NEW cov: 12133 ft: 14472 corp: 29/153b lim: 10 exec/s: 34 rss: 72Mb L: 4/9 MS: 1 ChangeBinInt- 00:06:44.957 [2024-07-15 20:20:37.337249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000da92 cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.337273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.337325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009292 cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.337338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.957 [2024-07-15 20:20:37.337388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00007a92 cdw11:00000000 00:06:44.957 [2024-07-15 20:20:37.337402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.216 #35 NEW cov: 12133 ft: 14481 corp: 30/160b lim: 10 exec/s: 35 rss: 72Mb L: 7/9 MS: 1 ChangeByte- 00:06:45.216 [2024-07-15 20:20:37.387372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.387395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.216 [2024-07-15 20:20:37.387468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.387482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.216 [2024-07-15 20:20:37.387532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.387545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.216 #36 NEW cov: 12133 ft: 14486 corp: 31/166b lim: 10 exec/s: 36 rss: 72Mb L: 6/9 MS: 1 ShuffleBytes- 00:06:45.216 [2024-07-15 20:20:37.437640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.437664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.216 [2024-07-15 20:20:37.437732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.437745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:45.216 [2024-07-15 20:20:37.437794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.437807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.216 [2024-07-15 20:20:37.437857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.437870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.216 #37 NEW cov: 12133 ft: 14492 corp: 32/175b lim: 10 exec/s: 37 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:45.216 [2024-07-15 20:20:37.487675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.487700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.216 [2024-07-15 20:20:37.487756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002300 cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.487770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.216 [2024-07-15 20:20:37.487822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 00:06:45.216 [2024-07-15 20:20:37.487835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.216 #38 NEW cov: 12133 ft: 14505 corp: 33/181b lim: 10 exec/s: 38 rss: 72Mb L: 6/9 MS: 1 ChangeByte- 00:06:45.217 [2024-07-15 20:20:37.528020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:45.217 [2024-07-15 20:20:37.528046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.217 [2024-07-15 20:20:37.528112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a11 cdw11:00000000 00:06:45.217 [2024-07-15 20:20:37.528128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.217 [2024-07-15 20:20:37.528178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:06:45.217 [2024-07-15 20:20:37.528192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.217 [2024-07-15 20:20:37.528240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:45.217 [2024-07-15 20:20:37.528253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.217 [2024-07-15 20:20:37.528306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000e cdw11:00000000 00:06:45.217 [2024-07-15 20:20:37.528319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 
cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.217 #39 NEW cov: 12133 ft: 14620 corp: 34/191b lim: 10 exec/s: 39 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:06:45.217 [2024-07-15 20:20:37.577963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:45.217 [2024-07-15 20:20:37.577988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.217 [2024-07-15 20:20:37.578056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a11 cdw11:00000000 00:06:45.217 [2024-07-15 20:20:37.578070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.217 [2024-07-15 20:20:37.578120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000020 cdw11:00000000 00:06:45.217 [2024-07-15 20:20:37.578134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.217 [2024-07-15 20:20:37.578185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 00:06:45.217 [2024-07-15 20:20:37.578198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.476 #40 NEW cov: 12133 ft: 14644 corp: 35/199b lim: 10 exec/s: 40 rss: 72Mb L: 8/10 MS: 1 ChangeBit- 00:06:45.476 [2024-07-15 20:20:37.618154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ae1 cdw11:00000000 00:06:45.476 [2024-07-15 20:20:37.618180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.476 [2024-07-15 20:20:37.618248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:06:45.476 [2024-07-15 20:20:37.618265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.476 [2024-07-15 20:20:37.618317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:06:45.476 [2024-07-15 20:20:37.618330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.476 [2024-07-15 20:20:37.618381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:06:45.476 [2024-07-15 20:20:37.618395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.476 [2024-07-15 20:20:37.618449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:06:45.476 [2024-07-15 20:20:37.618464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.476 [2024-07-15 20:20:37.658338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a32 cdw11:00000000 00:06:45.476 [2024-07-15 20:20:37.658361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.477 [2024-07-15 20:20:37.658430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:06:45.477 [2024-07-15 20:20:37.658448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.477 [2024-07-15 20:20:37.658498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:06:45.477 [2024-07-15 20:20:37.658512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.477 [2024-07-15 20:20:37.658562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:06:45.477 [2024-07-15 20:20:37.658575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.477 [2024-07-15 20:20:37.658635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:06:45.477 [2024-07-15 20:20:37.658648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.477 #43 NEW cov: 12133 ft: 14648 corp: 36/209b lim: 10 exec/s: 43 rss: 72Mb L: 10/10 MS: 3 CrossOver-InsertRepeatedBytes-ChangeByte- 00:06:45.477 [2024-07-15 20:20:37.698100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000270a cdw11:00000000 00:06:45.477 [2024-07-15 20:20:37.698125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.477 [2024-07-15 20:20:37.698179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000e0e cdw11:00000000 00:06:45.477 [2024-07-15 20:20:37.698192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.477 #44 NEW cov: 12133 ft: 14668 corp: 37/214b lim: 10 exec/s: 44 rss: 72Mb L: 5/10 MS: 1 CopyPart- 00:06:45.477 [2024-07-15 20:20:37.738326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003ff4 cdw11:00000000 00:06:45.477 [2024-07-15 20:20:37.738351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.477 [2024-07-15 20:20:37.738404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f50e cdw11:00000000 00:06:45.477 [2024-07-15 20:20:37.738418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.477 [2024-07-15 20:20:37.738471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000404a cdw11:00000000 00:06:45.477 [2024-07-15 20:20:37.738484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.477 #45 NEW cov: 12133 ft: 14682 corp: 38/220b lim: 10 exec/s: 22 rss: 72Mb L: 6/10 MS: 1 CMP- DE: "@J"- 00:06:45.477 #45 DONE cov: 12133 ft: 14682 corp: 38/220b lim: 10 exec/s: 22 rss: 72Mb 00:06:45.477 ###### Recommended dictionary. 
######
00:06:45.477 "@J" # Uses: 0
00:06:45.477 ###### End of recommended dictionary. ######
00:06:45.477 Done 45 runs in 2 second(s)
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
00:06:45.766 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408'
00:06:45.767 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:45.767 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:45.767 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:45.767 20:20:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8
00:06:45.767 [2024-07-15 20:20:37.941919] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:45.767 [2024-07-15 20:20:37.942010] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321736 ] 00:06:45.767 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.767 [2024-07-15 20:20:38.124980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.027 [2024-07-15 20:20:38.191821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.027 [2024-07-15 20:20:38.250956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.027 [2024-07-15 20:20:38.267248] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:46.027 INFO: Running with entropic power schedule (0xFF, 100). 00:06:46.027 INFO: Seed: 4266098091 00:06:46.027 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:46.027 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:46.027 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:46.027 INFO: A corpus is not provided, starting from an empty corpus 00:06:46.027 [2024-07-15 20:20:38.312511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.027 [2024-07-15 20:20:38.312539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.027 #2 INITED cov: 11926 ft: 11907 corp: 1/1b exec/s: 0 rss: 69Mb 00:06:46.027 [2024-07-15 20:20:38.352487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.027 [2024-07-15 20:20:38.352512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.027 #3 NEW cov: 12047 ft: 12266 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 CrossOver- 00:06:46.027 [2024-07-15 20:20:38.402667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.027 [2024-07-15 20:20:38.402692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.287 #4 NEW cov: 12053 ft: 12405 corp: 3/3b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeBit- 00:06:46.287 [2024-07-15 20:20:38.442907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.442931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.443000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.443014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.287 #5 NEW cov: 12138 ft: 13548 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 
1 CopyPart- 00:06:46.287 [2024-07-15 20:20:38.483270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.483293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.483364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.483378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.483435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.483452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.483521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.483535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.287 #6 NEW cov: 12138 ft: 13898 corp: 5/9b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:46.287 [2024-07-15 20:20:38.523424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.523455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.523542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.523556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.523610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.523624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.523678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.523691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.287 #7 NEW cov: 12138 ft: 13975 corp: 6/13b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeBit- 00:06:46.287 [2024-07-15 20:20:38.573763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.573788] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.573858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.573872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.573926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.573940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.573996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.574009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.574065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.574079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.287 #8 NEW cov: 12138 ft: 14103 corp: 7/18b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:06:46.287 [2024-07-15 20:20:38.623270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.623295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.287 #9 NEW cov: 12138 ft: 14170 corp: 8/19b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeByte- 00:06:46.287 [2024-07-15 20:20:38.663693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.663719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.663777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.663791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.287 [2024-07-15 20:20:38.663854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.287 [2024-07-15 20:20:38.663868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.547 #10 NEW cov: 12138 ft: 14348 corp: 9/22b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 CMP- DE: "\001\012"- 00:06:46.547 [2024-07-15 20:20:38.713686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.713710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.713778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.713792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.547 #11 NEW cov: 12138 ft: 14377 corp: 10/24b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 InsertByte- 00:06:46.547 [2024-07-15 20:20:38.764283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.764308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.764363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.764378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.764433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.764450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.764506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.764519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.764574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.764587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.547 #12 NEW cov: 12138 ft: 14418 corp: 11/29b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeBit- 00:06:46.547 [2024-07-15 20:20:38.814101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.814125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.814183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.814196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.814253] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.814269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.547 #13 NEW cov: 12138 ft: 14444 corp: 12/32b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:46.547 [2024-07-15 20:20:38.864233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.864258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.864313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.864327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.864382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.864395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.547 #14 NEW cov: 12138 ft: 14467 corp: 13/35b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 CrossOver- 00:06:46.547 [2024-07-15 20:20:38.904684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.904708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.904765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.904779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.904834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.904848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.904903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.904916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.547 [2024-07-15 20:20:38.904970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.547 [2024-07-15 20:20:38.904984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:06:46.547 #15 NEW cov: 12138 ft: 14487 corp: 14/40b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeBit- 00:06:46.806 [2024-07-15 20:20:38.944446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.806 [2024-07-15 20:20:38.944470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.806 [2024-07-15 20:20:38.944543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.806 [2024-07-15 20:20:38.944556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.806 [2024-07-15 20:20:38.944611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:38.944628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.807 #16 NEW cov: 12138 ft: 14492 corp: 15/43b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 EraseBytes- 00:06:46.807 [2024-07-15 20:20:38.994459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:38.994484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.807 [2024-07-15 20:20:38.994566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:38.994579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.807 #17 NEW cov: 12138 ft: 14501 corp: 16/45b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:06:46.807 [2024-07-15 20:20:39.044741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.044765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.807 [2024-07-15 20:20:39.044837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.044850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.807 [2024-07-15 20:20:39.044904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.044917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.807 #18 NEW cov: 12138 ft: 14535 corp: 17/48b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 EraseBytes- 00:06:46.807 [2024-07-15 20:20:39.084669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.084694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.807 [2024-07-15 20:20:39.084748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.084762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.807 #19 NEW cov: 12138 ft: 14587 corp: 18/50b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:06:46.807 [2024-07-15 20:20:39.125251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.125276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.807 [2024-07-15 20:20:39.125347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.125360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.807 [2024-07-15 20:20:39.125416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.125430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.807 [2024-07-15 20:20:39.125457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.125468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.807 [2024-07-15 20:20:39.125487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.125497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.807 #20 NEW cov: 12147 ft: 14617 corp: 19/55b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CMP- DE: "\034\000"- 00:06:46.807 [2024-07-15 20:20:39.175156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.175182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.807 [2024-07-15 20:20:39.175238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.175252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.807 
[2024-07-15 20:20:39.175306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.807 [2024-07-15 20:20:39.175319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.325 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:47.325 #21 NEW cov: 12170 ft: 14656 corp: 20/58b lim: 5 exec/s: 21 rss: 71Mb L: 3/5 MS: 1 ChangeByte- 00:06:47.325 [2024-07-15 20:20:39.486178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.486219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.486291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.486310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.486382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.486399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.325 #22 NEW cov: 12170 ft: 14673 corp: 21/61b lim: 5 exec/s: 22 rss: 72Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:47.325 [2024-07-15 20:20:39.536355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.536380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.536447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.536461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.536522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.536539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.536596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.536609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.325 #23 NEW cov: 12170 ft: 14701 corp: 22/65b lim: 5 exec/s: 23 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:47.325 [2024-07-15 20:20:39.576616] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.576642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.576702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.576716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.576776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.576790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.576845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.576859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.576920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.576934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.325 #24 NEW cov: 12170 ft: 14715 corp: 23/70b lim: 5 exec/s: 24 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:06:47.325 [2024-07-15 20:20:39.616031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.616056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.325 #25 NEW cov: 12170 ft: 14809 corp: 24/71b lim: 5 exec/s: 25 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:47.325 [2024-07-15 20:20:39.656386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.656411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.325 [2024-07-15 20:20:39.656493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.325 [2024-07-15 20:20:39.656508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.325 #26 NEW cov: 12170 ft: 14920 corp: 25/73b lim: 5 exec/s: 26 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:47.585 [2024-07-15 20:20:39.706997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.707023] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.707086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.707100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.707158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.707172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.707229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.707243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.707303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.707317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.585 #27 NEW cov: 12170 ft: 14932 corp: 26/78b lim: 5 exec/s: 27 rss: 72Mb L: 5/5 MS: 1 ChangeBinInt- 00:06:47.585 [2024-07-15 20:20:39.746776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.746800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.746860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.746874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.746951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.746965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.585 #28 NEW cov: 12170 ft: 14938 corp: 27/81b lim: 5 exec/s: 28 rss: 72Mb L: 3/5 MS: 1 ChangeBinInt- 00:06:47.585 [2024-07-15 20:20:39.796929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.796954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.797032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.797046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.797105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.797119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.585 #29 NEW cov: 12170 ft: 14961 corp: 28/84b lim: 5 exec/s: 29 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:06:47.585 [2024-07-15 20:20:39.847375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.847400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.847479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.847494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.847551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.847564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.847635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.847648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.847710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.847723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.585 #30 NEW cov: 12170 ft: 14977 corp: 29/89b lim: 5 exec/s: 30 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:06:47.585 [2024-07-15 20:20:39.897197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.897222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.897284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.897298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.897358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.897372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.585 #31 NEW cov: 12170 ft: 14984 corp: 30/92b lim: 5 exec/s: 31 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:06:47.585 [2024-07-15 20:20:39.937326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.937351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.937415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.937429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.585 [2024-07-15 20:20:39.937495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.585 [2024-07-15 20:20:39.937509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.585 #32 NEW cov: 12170 ft: 15015 corp: 31/95b lim: 5 exec/s: 32 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:06:47.845 [2024-07-15 20:20:39.977617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:39.977645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:39.977706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:39.977720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:39.977781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:39.977794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:39.977853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:39.977866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.845 #33 NEW cov: 12170 ft: 15019 corp: 32/99b lim: 5 exec/s: 33 rss: 72Mb L: 4/5 MS: 1 ChangeByte- 00:06:47.845 [2024-07-15 20:20:40.017688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.017713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:40.017776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.017790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:40.017853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.017867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:40.017927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.017940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.845 #34 NEW cov: 12170 ft: 15028 corp: 33/103b lim: 5 exec/s: 34 rss: 72Mb L: 4/5 MS: 1 ChangeBinInt- 00:06:47.845 [2024-07-15 20:20:40.057686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.057712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:40.057774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.057788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:40.057849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.057863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.845 #35 NEW cov: 12170 ft: 15043 corp: 34/106b lim: 5 exec/s: 35 rss: 72Mb L: 3/5 MS: 1 PersAutoDict- DE: "\034\000"- 00:06:47.845 [2024-07-15 20:20:40.097816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.097845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:40.097907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.097921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:40.097984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 
20:20:40.097998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.845 #36 NEW cov: 12170 ft: 15092 corp: 35/109b lim: 5 exec/s: 36 rss: 72Mb L: 3/5 MS: 1 CrossOver- 00:06:47.845 [2024-07-15 20:20:40.137575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.137599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.845 #37 NEW cov: 12170 ft: 15147 corp: 36/110b lim: 5 exec/s: 37 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:47.845 [2024-07-15 20:20:40.178020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.178044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:40.178120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.178134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.845 [2024-07-15 20:20:40.178194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.845 [2024-07-15 20:20:40.178207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.845 #38 NEW cov: 12170 ft: 15237 corp: 37/113b lim: 5 exec/s: 38 rss: 73Mb L: 3/5 MS: 1 ChangeByte- 00:06:48.104 [2024-07-15 20:20:40.228549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.104 [2024-07-15 20:20:40.228574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.104 [2024-07-15 20:20:40.228635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.104 [2024-07-15 20:20:40.228649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.104 [2024-07-15 20:20:40.228706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.104 [2024-07-15 20:20:40.228719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.104 [2024-07-15 20:20:40.228774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.104 [2024-07-15 20:20:40.228788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.104 [2024-07-15 20:20:40.228846] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.104 [2024-07-15 20:20:40.228860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.104 #39 NEW cov: 12170 ft: 15306 corp: 38/118b lim: 5 exec/s: 39 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:06:48.104 [2024-07-15 20:20:40.278327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.104 [2024-07-15 20:20:40.278351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.104 [2024-07-15 20:20:40.278412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.104 [2024-07-15 20:20:40.278425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.105 [2024-07-15 20:20:40.278503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.105 [2024-07-15 20:20:40.278517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.105 #40 NEW cov: 12170 ft: 15352 corp: 39/121b lim: 5 exec/s: 20 rss: 73Mb L: 3/5 MS: 1 CopyPart- 00:06:48.105 #40 DONE cov: 12170 ft: 15352 corp: 39/121b lim: 5 exec/s: 20 rss: 73Mb 00:06:48.105 ###### Recommended dictionary. ###### 00:06:48.105 "\001\012" # Uses: 0 00:06:48.105 "\034\000" # Uses: 1 00:06:48.105 ###### End of recommended dictionary. 
######
00:06:48.105 Done 40 runs in 2 second(s)
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:06:48.105 20:20:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
[2024-07-15 20:20:40.464937] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:48.105 [2024-07-15 20:20:40.465006] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322104 ] 00:06:48.364 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.364 [2024-07-15 20:20:40.645890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.364 [2024-07-15 20:20:40.713472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.623 [2024-07-15 20:20:40.773302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.623 [2024-07-15 20:20:40.789621] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:48.623 INFO: Running with entropic power schedule (0xFF, 100). 00:06:48.623 INFO: Seed: 2493140653 00:06:48.623 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:48.623 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:48.623 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:48.623 INFO: A corpus is not provided, starting from an empty corpus 00:06:48.623 [2024-07-15 20:20:40.834872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.623 [2024-07-15 20:20:40.834900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.623 #2 INITED cov: 11925 ft: 11907 corp: 1/1b exec/s: 0 rss: 69Mb 00:06:48.623 [2024-07-15 20:20:40.874824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.623 [2024-07-15 20:20:40.874851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.623 #3 NEW cov: 12047 ft: 12407 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ShuffleBytes- 00:06:48.623 [2024-07-15 20:20:40.925020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.623 [2024-07-15 20:20:40.925045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.623 #4 NEW cov: 12053 ft: 12548 corp: 3/3b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeBit- 00:06:48.623 [2024-07-15 20:20:40.965249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.623 [2024-07-15 20:20:40.965275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.623 [2024-07-15 20:20:40.965333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.623 [2024-07-15 20:20:40.965347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.623 #5 NEW cov: 12138 ft: 13593 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 
MS: 1 InsertByte- 00:06:48.883 [2024-07-15 20:20:41.015259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.015286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.883 #6 NEW cov: 12138 ft: 13671 corp: 5/6b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:06:48.883 [2024-07-15 20:20:41.065421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.065451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.883 #7 NEW cov: 12138 ft: 13720 corp: 6/7b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ShuffleBytes- 00:06:48.883 [2024-07-15 20:20:41.105639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.105664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.883 [2024-07-15 20:20:41.105718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.105731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.883 #8 NEW cov: 12138 ft: 13834 corp: 7/9b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeByte- 00:06:48.883 [2024-07-15 20:20:41.155667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.155692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.883 #9 NEW cov: 12138 ft: 13897 corp: 8/10b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:06:48.883 [2024-07-15 20:20:41.196217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.196241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.883 [2024-07-15 20:20:41.196313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.196326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.883 [2024-07-15 20:20:41.196383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.196397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.883 [2024-07-15 20:20:41.196456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.196469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.883 #10 NEW cov: 12138 ft: 14231 corp: 9/14b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:48.883 [2024-07-15 20:20:41.246060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.246084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.883 [2024-07-15 20:20:41.246140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.883 [2024-07-15 20:20:41.246154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.142 #11 NEW cov: 12138 ft: 14259 corp: 10/16b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:06:49.142 [2024-07-15 20:20:41.296186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.296210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.142 [2024-07-15 20:20:41.296286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.296299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.142 #12 NEW cov: 12138 ft: 14275 corp: 11/18b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CrossOver- 00:06:49.142 [2024-07-15 20:20:41.346366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.346390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.142 [2024-07-15 20:20:41.346467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.346481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.142 #13 NEW cov: 12138 ft: 14309 corp: 12/20b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 EraseBytes- 00:06:49.142 [2024-07-15 20:20:41.396361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.396386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.142 #14 NEW cov: 12138 ft: 14320 corp: 13/21b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 EraseBytes- 00:06:49.142 [2024-07-15 20:20:41.436876] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.436900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.142 [2024-07-15 20:20:41.436969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.436982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.142 [2024-07-15 20:20:41.437035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.437048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.142 [2024-07-15 20:20:41.437100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.437113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.142 #15 NEW cov: 12138 ft: 14369 corp: 14/25b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:06:49.142 [2024-07-15 20:20:41.486595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.142 [2024-07-15 20:20:41.486620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.142 #16 NEW cov: 12138 ft: 14451 corp: 15/26b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 CrossOver- 00:06:49.400 [2024-07-15 20:20:41.526889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.401 [2024-07-15 20:20:41.526913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.401 [2024-07-15 20:20:41.526971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.401 [2024-07-15 20:20:41.526988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.401 #17 NEW cov: 12138 ft: 14463 corp: 16/28b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 ChangeByte- 00:06:49.401 [2024-07-15 20:20:41.566860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.401 [2024-07-15 20:20:41.566884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.401 #18 NEW cov: 12138 ft: 14531 corp: 17/29b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ChangeByte- 00:06:49.401 [2024-07-15 20:20:41.617305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.401 [2024-07-15 20:20:41.617330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.401 [2024-07-15 20:20:41.617402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.401 [2024-07-15 20:20:41.617416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.401 [2024-07-15 20:20:41.617472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.401 [2024-07-15 20:20:41.617485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.401 #19 NEW cov: 12138 ft: 14724 corp: 18/32b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 EraseBytes- 00:06:49.401 [2024-07-15 20:20:41.657122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.401 [2024-07-15 20:20:41.657146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.401 #20 NEW cov: 12138 ft: 14741 corp: 19/33b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 EraseBytes- 00:06:49.401 [2024-07-15 20:20:41.697385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.401 [2024-07-15 20:20:41.697409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.401 [2024-07-15 20:20:41.697484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.401 [2024-07-15 20:20:41.697499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.660 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:49.660 #21 NEW cov: 12161 ft: 14780 corp: 20/35b lim: 5 exec/s: 21 rss: 71Mb L: 2/4 MS: 1 CopyPart- 00:06:49.660 [2024-07-15 20:20:41.998168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.660 [2024-07-15 20:20:41.998203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.660 [2024-07-15 20:20:41.998267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.660 [2024-07-15 20:20:41.998282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.660 #22 NEW cov: 12161 ft: 14877 corp: 21/37b lim: 5 exec/s: 22 rss: 71Mb L: 2/4 MS: 1 CrossOver- 00:06:49.660 [2024-07-15 20:20:42.038188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.660 [2024-07-15 20:20:42.038214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.660 [2024-07-15 20:20:42.038272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.660 [2024-07-15 20:20:42.038286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.919 #23 NEW cov: 12161 ft: 14928 corp: 22/39b lim: 5 exec/s: 23 rss: 72Mb L: 2/4 MS: 1 EraseBytes- 00:06:49.919 [2024-07-15 20:20:42.088195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.088220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.919 #24 NEW cov: 12161 ft: 14945 corp: 23/40b lim: 5 exec/s: 24 rss: 72Mb L: 1/4 MS: 1 ShuffleBytes- 00:06:49.919 [2024-07-15 20:20:42.128897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.128922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.919 [2024-07-15 20:20:42.128978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.128992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.919 [2024-07-15 20:20:42.129046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.129060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.919 [2024-07-15 20:20:42.129113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.129126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.919 [2024-07-15 20:20:42.129179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.129192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.919 #25 NEW cov: 12161 ft: 15022 corp: 24/45b lim: 5 exec/s: 25 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:06:49.919 [2024-07-15 20:20:42.168579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.168604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.919 [2024-07-15 20:20:42.168673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.168687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.919 #26 NEW cov: 12161 ft: 15084 corp: 25/47b lim: 5 exec/s: 26 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:06:49.919 [2024-07-15 20:20:42.218531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.218562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.919 #27 NEW cov: 12161 ft: 15127 corp: 26/48b lim: 5 exec/s: 27 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:06:49.919 [2024-07-15 20:20:42.248990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.249016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.919 [2024-07-15 20:20:42.249074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.249087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.919 [2024-07-15 20:20:42.249141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.249155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.919 #28 NEW cov: 12161 ft: 15147 corp: 27/51b lim: 5 exec/s: 28 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:06:49.919 [2024-07-15 20:20:42.298797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.919 [2024-07-15 20:20:42.298822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.179 #29 NEW cov: 12161 ft: 15160 corp: 28/52b lim: 5 exec/s: 29 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:50.179 [2024-07-15 20:20:42.339210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.339236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.179 [2024-07-15 20:20:42.339293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.339306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.179 [2024-07-15 20:20:42.339363] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.339377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.179 #30 NEW cov: 12161 ft: 15167 corp: 29/55b lim: 5 exec/s: 30 rss: 72Mb L: 3/5 MS: 1 ChangeByte- 00:06:50.179 [2024-07-15 20:20:42.389175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.389200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.179 [2024-07-15 20:20:42.389272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.389286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.179 #31 NEW cov: 12161 ft: 15174 corp: 30/57b lim: 5 exec/s: 31 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:50.179 [2024-07-15 20:20:42.429435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.429467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.179 [2024-07-15 20:20:42.429538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.429552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.179 [2024-07-15 20:20:42.429607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.429620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.179 #32 NEW cov: 12161 ft: 15206 corp: 31/60b lim: 5 exec/s: 32 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:06:50.179 [2024-07-15 20:20:42.469418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.469448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.179 [2024-07-15 20:20:42.469521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.469535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.179 #33 NEW cov: 12161 ft: 15231 corp: 32/62b lim: 5 exec/s: 33 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:50.179 [2024-07-15 20:20:42.519737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 
cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.179 [2024-07-15 20:20:42.519762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.179 [2024-07-15 20:20:42.519834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.180 [2024-07-15 20:20:42.519848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.180 [2024-07-15 20:20:42.519904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.180 [2024-07-15 20:20:42.519918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.180 #34 NEW cov: 12161 ft: 15234 corp: 33/65b lim: 5 exec/s: 34 rss: 72Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:50.180 [2024-07-15 20:20:42.559664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.180 [2024-07-15 20:20:42.559689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.180 [2024-07-15 20:20:42.559744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.180 [2024-07-15 20:20:42.559758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.439 #35 NEW cov: 12161 ft: 15242 corp: 34/67b lim: 5 exec/s: 35 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:50.439 [2024-07-15 20:20:42.609839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.439 [2024-07-15 20:20:42.609863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.439 [2024-07-15 20:20:42.609935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.439 [2024-07-15 20:20:42.609952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.439 #36 NEW cov: 12161 ft: 15251 corp: 35/69b lim: 5 exec/s: 36 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:06:50.439 [2024-07-15 20:20:42.649741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.439 [2024-07-15 20:20:42.649765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.439 #37 NEW cov: 12161 ft: 15267 corp: 36/70b lim: 5 exec/s: 37 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:50.439 [2024-07-15 20:20:42.690012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.439 [2024-07-15 
20:20:42.690037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.439 [2024-07-15 20:20:42.690092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.439 [2024-07-15 20:20:42.690106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.439 #38 NEW cov: 12161 ft: 15276 corp: 37/72b lim: 5 exec/s: 38 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:50.439 [2024-07-15 20:20:42.730119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.439 [2024-07-15 20:20:42.730143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.439 [2024-07-15 20:20:42.730198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.439 [2024-07-15 20:20:42.730211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.439 #39 NEW cov: 12161 ft: 15304 corp: 38/74b lim: 5 exec/s: 39 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:50.439 [2024-07-15 20:20:42.780290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.439 [2024-07-15 20:20:42.780313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.439 [2024-07-15 20:20:42.780366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.439 [2024-07-15 20:20:42.780379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.439 #40 NEW cov: 12161 ft: 15313 corp: 39/76b lim: 5 exec/s: 40 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:50.698 [2024-07-15 20:20:42.830261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.698 [2024-07-15 20:20:42.830285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.698 #41 NEW cov: 12161 ft: 15320 corp: 40/77b lim: 5 exec/s: 20 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:50.698 #41 DONE cov: 12161 ft: 15320 corp: 40/77b lim: 5 exec/s: 20 rss: 73Mb 00:06:50.698 Done 41 runs in 2 second(s) 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:50.698 
20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:50.698 20:20:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:50.698 20:20:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:50.698 20:20:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:50.698 20:20:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:50.698 [2024-07-15 20:20:43.031169] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:50.698 [2024-07-15 20:20:43.031239] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322633 ] 00:06:50.698 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.958 [2024-07-15 20:20:43.209821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.958 [2024-07-15 20:20:43.275154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.958 [2024-07-15 20:20:43.334652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.216 [2024-07-15 20:20:43.350886] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:51.216 INFO: Running with entropic power schedule (0xFF, 100). 00:06:51.216 INFO: Seed: 758169961 00:06:51.216 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:51.216 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:51.216 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:51.216 INFO: A corpus is not provided, starting from an empty corpus 00:06:51.216 #2 INITED exec/s: 0 rss: 63Mb 00:06:51.216 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:51.216 This may also happen if the target rejected all inputs we tried so far 00:06:51.216 [2024-07-15 20:20:43.410796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.216 [2024-07-15 20:20:43.410834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.216 [2024-07-15 20:20:43.410970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.216 [2024-07-15 20:20:43.410988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.474 NEW_FUNC[1/697]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:51.474 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:51.474 #4 NEW cov: 11957 ft: 11949 corp: 2/24b lim: 40 exec/s: 0 rss: 70Mb L: 23/23 MS: 2 CMP-InsertRepeatedBytes- DE: "j\000\000\000"- 00:06:51.474 [2024-07-15 20:20:43.741838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.474 [2024-07-15 20:20:43.741878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.474 [2024-07-15 20:20:43.742013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000055 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.474 [2024-07-15 20:20:43.742029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.474 [2024-07-15 20:20:43.742158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.474 [2024-07-15 20:20:43.742176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.474 #5 NEW cov: 12070 ft: 12868 corp: 3/48b lim: 40 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 InsertByte- 00:06:51.474 [2024-07-15 20:20:43.801109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a000000 cdw11:0a6a006a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.474 [2024-07-15 20:20:43.801140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.474 #9 NEW cov: 12076 ft: 13357 corp: 4/56b lim: 40 exec/s: 0 rss: 71Mb L: 8/24 MS: 4 PersAutoDict-EraseBytes-PersAutoDict-CopyPart- DE: "j\000\000\000"-"j\000\000\000"- 00:06:51.474 [2024-07-15 20:20:43.841240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a000000 cdw11:0a6a006a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.474 [2024-07-15 20:20:43.841269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.733 #10 NEW cov: 12161 ft: 13695 corp: 5/64b lim: 40 exec/s: 0 
rss: 71Mb L: 8/24 MS: 1 CrossOver- 00:06:51.733 [2024-07-15 20:20:43.891981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:43.892010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.733 [2024-07-15 20:20:43.892156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:43.892172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.733 #11 NEW cov: 12161 ft: 13810 corp: 6/87b lim: 40 exec/s: 0 rss: 71Mb L: 23/24 MS: 1 ChangeByte- 00:06:51.733 [2024-07-15 20:20:43.932062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a006a00 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:43.932090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.733 [2024-07-15 20:20:43.932218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6a000a6a cdw11:006a006a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:43.932235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.733 #12 NEW cov: 12161 ft: 13952 corp: 7/103b lim: 40 exec/s: 0 rss: 71Mb L: 16/24 MS: 1 CrossOver- 00:06:51.733 [2024-07-15 20:20:43.972829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:43.972856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.733 [2024-07-15 20:20:43.972997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000fbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:43.973016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.733 [2024-07-15 20:20:43.973147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:43.973163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.733 [2024-07-15 20:20:43.973298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:43.973315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.733 [2024-07-15 20:20:43.973447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:43.973465] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.733 #13 NEW cov: 12161 ft: 14502 corp: 8/143b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:51.733 [2024-07-15 20:20:44.012932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.733 [2024-07-15 20:20:44.012960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.733 [2024-07-15 20:20:44.013100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000fbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.734 [2024-07-15 20:20:44.013118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.734 [2024-07-15 20:20:44.013248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.734 [2024-07-15 20:20:44.013265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.734 [2024-07-15 20:20:44.013401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.734 [2024-07-15 20:20:44.013418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.734 [2024-07-15 20:20:44.013560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:fc000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.734 [2024-07-15 20:20:44.013578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.734 #14 NEW cov: 12161 ft: 14525 corp: 9/183b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:51.734 [2024-07-15 20:20:44.072451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.734 [2024-07-15 20:20:44.072480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.734 [2024-07-15 20:20:44.072619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.734 [2024-07-15 20:20:44.072637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.734 #15 NEW cov: 12161 ft: 14584 corp: 10/203b lim: 40 exec/s: 0 rss: 71Mb L: 20/40 MS: 1 InsertRepeatedBytes- 00:06:51.994 [2024-07-15 20:20:44.122553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a6a6a6a cdw11:6a146a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.122582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.122709] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.122727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.994 #21 NEW cov: 12161 ft: 14671 corp: 11/223b lim: 40 exec/s: 0 rss: 72Mb L: 20/40 MS: 1 ChangeBinInt- 00:06:51.994 [2024-07-15 20:20:44.173069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.173107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.173241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:6a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.173261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.173395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.173413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.994 #22 NEW cov: 12161 ft: 14754 corp: 12/252b lim: 40 exec/s: 0 rss: 72Mb L: 29/40 MS: 1 CrossOver- 00:06:51.994 [2024-07-15 20:20:44.212926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.212952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.213081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.213099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.994 #23 NEW cov: 12161 ft: 14778 corp: 13/275b lim: 40 exec/s: 0 rss: 72Mb L: 23/40 MS: 1 ChangeBit- 00:06:51.994 [2024-07-15 20:20:44.253189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.253216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.253353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a6a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.253371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.253497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.253518] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.994 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:51.994 #24 NEW cov: 12184 ft: 14835 corp: 14/304b lim: 40 exec/s: 0 rss: 72Mb L: 29/40 MS: 1 CrossOver- 00:06:51.994 [2024-07-15 20:20:44.303821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000fb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.303848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.303981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000fbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.303999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.304120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.304136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.304280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.304296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.304426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:fc000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.304445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.994 #25 NEW cov: 12184 ft: 14852 corp: 15/344b lim: 40 exec/s: 0 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:51.994 [2024-07-15 20:20:44.353433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.353465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.353597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a6a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.353615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.994 [2024-07-15 20:20:44.353754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.994 [2024-07-15 20:20:44.353771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.348 #26 NEW cov: 12184 ft: 14894 corp: 16/373b lim: 40 exec/s: 26 rss: 72Mb L: 
29/40 MS: 1 ChangeByte- 00:06:52.348 [2024-07-15 20:20:44.413410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.413439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.348 [2024-07-15 20:20:44.413574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.413591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.348 #27 NEW cov: 12184 ft: 14960 corp: 17/391b lim: 40 exec/s: 27 rss: 72Mb L: 18/40 MS: 1 EraseBytes- 00:06:52.348 [2024-07-15 20:20:44.453481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a6a6a6a cdw11:6a146a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.453509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.348 [2024-07-15 20:20:44.453653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6a6a6a7a cdw11:6a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.453671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.348 #28 NEW cov: 12184 ft: 14980 corp: 18/411b lim: 40 exec/s: 28 rss: 72Mb L: 20/40 MS: 1 ChangeBit- 00:06:52.348 [2024-07-15 20:20:44.503631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.503659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.348 [2024-07-15 20:20:44.503798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:db6a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.503814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.348 #29 NEW cov: 12184 ft: 15015 corp: 19/432b lim: 40 exec/s: 29 rss: 72Mb L: 21/40 MS: 1 InsertByte- 00:06:52.348 [2024-07-15 20:20:44.544393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000fb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.544421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.348 [2024-07-15 20:20:44.544555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000fbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.544572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.348 [2024-07-15 20:20:44.544694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfbfb SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.544711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.348 [2024-07-15 20:20:44.544840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00fbfbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.544858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.348 [2024-07-15 20:20:44.544987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:fbfbfbfb cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.545004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.348 #30 NEW cov: 12184 ft: 15029 corp: 20/472b lim: 40 exec/s: 30 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:52.348 [2024-07-15 20:20:44.594162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a5b6a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.594190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.348 [2024-07-15 20:20:44.594332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.594351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.348 [2024-07-15 20:20:44.594490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.594508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.348 #31 NEW cov: 12184 ft: 15037 corp: 21/496b lim: 40 exec/s: 31 rss: 72Mb L: 24/40 MS: 1 InsertByte- 00:06:52.348 [2024-07-15 20:20:44.633898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a000028 cdw11:000a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.633926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.348 #33 NEW cov: 12184 ft: 15053 corp: 22/504b lim: 40 exec/s: 33 rss: 72Mb L: 8/40 MS: 2 EraseBytes-InsertByte- 00:06:52.348 [2024-07-15 20:20:44.673965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.348 [2024-07-15 20:20:44.673991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.348 #34 NEW cov: 12184 ft: 15083 corp: 23/512b lim: 40 exec/s: 34 rss: 72Mb L: 8/40 MS: 1 ChangeBinInt- 00:06:52.649 [2024-07-15 20:20:44.714913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.714940] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.715073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000fbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.715092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.715227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.715243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.715380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00002800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.715398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.715530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.715550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.649 #35 NEW cov: 12184 ft: 15139 corp: 24/552b lim: 40 exec/s: 35 rss: 72Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:52.649 [2024-07-15 20:20:44.754626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a6a6a6a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.754654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.754788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6a6a6a6a cdw11:6a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.754804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.754946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a6a006a cdw11:6a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.754964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.649 #36 NEW cov: 12184 ft: 15168 corp: 25/576b lim: 40 exec/s: 36 rss: 72Mb L: 24/40 MS: 1 PersAutoDict- DE: "j\000\000\000"- 00:06:52.649 [2024-07-15 20:20:44.795176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000fb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.795205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.795345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:fbfbfbfb SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.795362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.795504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.795522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.795656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:00fcfb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.795674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.795812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:fc000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.795828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.649 #37 NEW cov: 12184 ft: 15190 corp: 26/616b lim: 40 exec/s: 37 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:52.649 [2024-07-15 20:20:44.834712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a000000 cdw11:0a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.834739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.834875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6a6a6a00 cdw11:00000a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.834892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.649 #38 NEW cov: 12184 ft: 15202 corp: 27/639b lim: 40 exec/s: 38 rss: 72Mb L: 23/40 MS: 1 CrossOver- 00:06:52.649 [2024-07-15 20:20:44.875478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.649 [2024-07-15 20:20:44.875504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.649 [2024-07-15 20:20:44.875641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000fbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:44.875658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.650 [2024-07-15 20:20:44.875787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:44.875806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.650 [2024-07-15 20:20:44.875940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 
nsid:0 cdw10:fbfbfbfb cdw11:fbfbfb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:44.875957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.650 [2024-07-15 20:20:44.876086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:44.876102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.650 #39 NEW cov: 12184 ft: 15226 corp: 28/679b lim: 40 exec/s: 39 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:52.650 [2024-07-15 20:20:44.914920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:44.914949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.650 [2024-07-15 20:20:44.915092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:44.915109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.650 #40 NEW cov: 12184 ft: 15252 corp: 29/702b lim: 40 exec/s: 40 rss: 72Mb L: 23/40 MS: 1 ChangeBinInt- 00:06:52.650 [2024-07-15 20:20:44.965276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:44.965303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.650 [2024-07-15 20:20:44.965448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000055 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:44.965467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.650 [2024-07-15 20:20:44.965595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00006a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:44.965613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.650 #41 NEW cov: 12184 ft: 15264 corp: 30/726b lim: 40 exec/s: 41 rss: 72Mb L: 24/40 MS: 1 PersAutoDict- DE: "j\000\000\000"- 00:06:52.650 [2024-07-15 20:20:45.015067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a000028 cdw11:0028000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.650 [2024-07-15 20:20:45.015093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.909 #42 NEW cov: 12184 ft: 15316 corp: 31/737b lim: 40 exec/s: 42 rss: 73Mb L: 11/40 MS: 1 CopyPart- 00:06:52.909 [2024-07-15 20:20:45.065070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6b0000 cdw11:000a6b00 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:52.909 [2024-07-15 20:20:45.065098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.909 #45 NEW cov: 12184 ft: 15345 corp: 32/747b lim: 40 exec/s: 45 rss: 73Mb L: 10/40 MS: 3 PersAutoDict-ChangeBit-CopyPart- DE: "j\000\000\000"- 00:06:52.909 [2024-07-15 20:20:45.105239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:6a000000 cdw11:0a6a7a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.909 [2024-07-15 20:20:45.105270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.909 #46 NEW cov: 12184 ft: 15353 corp: 33/756b lim: 40 exec/s: 46 rss: 73Mb L: 9/40 MS: 1 InsertByte- 00:06:52.909 [2024-07-15 20:20:45.146270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000fb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.909 [2024-07-15 20:20:45.146297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.909 [2024-07-15 20:20:45.146430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000fbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.909 [2024-07-15 20:20:45.146450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.909 [2024-07-15 20:20:45.146581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfbfb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.909 [2024-07-15 20:20:45.146599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.910 [2024-07-15 20:20:45.146727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:fbfbfb00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.910 [2024-07-15 20:20:45.146744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.910 [2024-07-15 20:20:45.146875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:fc000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.910 [2024-07-15 20:20:45.146893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.910 #47 NEW cov: 12184 ft: 15385 corp: 34/796b lim: 40 exec/s: 47 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:06:52.910 [2024-07-15 20:20:45.185516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:006a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.910 [2024-07-15 20:20:45.185544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.910 #48 NEW cov: 12184 ft: 15394 corp: 35/808b lim: 40 exec/s: 48 rss: 73Mb L: 12/40 MS: 1 PersAutoDict- DE: "j\000\000\000"- 00:06:52.910 [2024-07-15 20:20:45.236346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.910 [2024-07-15 
20:20:45.236373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.910 [2024-07-15 20:20:45.236503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.910 [2024-07-15 20:20:45.236521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.910 [2024-07-15 20:20:45.236651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0a6a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.910 [2024-07-15 20:20:45.236667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.910 [2024-07-15 20:20:45.236801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.910 [2024-07-15 20:20:45.236818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.910 #49 NEW cov: 12184 ft: 15406 corp: 36/846b lim: 40 exec/s: 49 rss: 73Mb L: 38/40 MS: 1 InsertRepeatedBytes- 00:06:52.910 [2024-07-15 20:20:45.275887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:32000000 cdw11:0a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.910 [2024-07-15 20:20:45.275916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.910 [2024-07-15 20:20:45.276047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6a6a6a00 cdw11:00000a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.910 [2024-07-15 20:20:45.276064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.170 #50 NEW cov: 12184 ft: 15438 corp: 37/869b lim: 40 exec/s: 50 rss: 73Mb L: 23/40 MS: 1 ChangeByte- 00:06:53.170 [2024-07-15 20:20:45.326106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a6a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.170 [2024-07-15 20:20:45.326136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.170 [2024-07-15 20:20:45.326266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f80004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.170 [2024-07-15 20:20:45.326284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.170 #51 NEW cov: 12184 ft: 15440 corp: 38/892b lim: 40 exec/s: 51 rss: 73Mb L: 23/40 MS: 1 ChangeBinInt- 00:06:53.170 [2024-07-15 20:20:45.366492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:006a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.170 [2024-07-15 20:20:45.366520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.170 [2024-07-15 20:20:45.366650] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.170 [2024-07-15 20:20:45.366668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.170 [2024-07-15 20:20:45.366809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.170 [2024-07-15 20:20:45.366825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.170 #52 NEW cov: 12184 ft: 15471 corp: 39/922b lim: 40 exec/s: 26 rss: 73Mb L: 30/40 MS: 1 InsertRepeatedBytes- 00:06:53.170 #52 DONE cov: 12184 ft: 15471 corp: 39/922b lim: 40 exec/s: 26 rss: 73Mb 00:06:53.170 ###### Recommended dictionary. ###### 00:06:53.170 "j\000\000\000" # Uses: 6 00:06:53.170 ###### End of recommended dictionary. ###### 00:06:53.170 Done 52 runs in 2 second(s) 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:53.170 20:20:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c 
/tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:53.429 [2024-07-15 20:20:45.569393] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:53.429 [2024-07-15 20:20:45.569471] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323064 ] 00:06:53.429 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.429 [2024-07-15 20:20:45.754053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.688 [2024-07-15 20:20:45.821304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.688 [2024-07-15 20:20:45.880567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.688 [2024-07-15 20:20:45.896853] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:53.688 INFO: Running with entropic power schedule (0xFF, 100). 00:06:53.688 INFO: Seed: 3306150699 00:06:53.688 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:53.688 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:53.688 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:53.688 INFO: A corpus is not provided, starting from an empty corpus 00:06:53.688 #2 INITED exec/s: 0 rss: 63Mb 00:06:53.688 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:53.688 This may also happen if the target rejected all inputs we tried so far 00:06:53.689 [2024-07-15 20:20:45.941517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.689 [2024-07-15 20:20:45.941548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.947 NEW_FUNC[1/698]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:53.947 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:53.947 #9 NEW cov: 11969 ft: 11968 corp: 2/15b lim: 40 exec/s: 0 rss: 70Mb L: 14/14 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:53.947 [2024-07-15 20:20:46.282335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.947 [2024-07-15 20:20:46.282371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.206 #10 NEW cov: 12082 ft: 12450 corp: 3/29b lim: 40 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 ShuffleBytes- 00:06:54.206 [2024-07-15 20:20:46.362435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.206 [2024-07-15 20:20:46.362470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.206 #11 NEW cov: 12088 ft: 12860 corp: 4/43b lim: 40 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 ShuffleBytes- 
00:06:54.206 [2024-07-15 20:20:46.412698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.206 [2024-07-15 20:20:46.412727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.206 [2024-07-15 20:20:46.412777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.206 [2024-07-15 20:20:46.412792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.206 [2024-07-15 20:20:46.412822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:c20a1515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.206 [2024-07-15 20:20:46.412837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.206 #12 NEW cov: 12173 ft: 13860 corp: 5/74b lim: 40 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:54.206 [2024-07-15 20:20:46.492812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.206 [2024-07-15 20:20:46.492842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.206 #13 NEW cov: 12173 ft: 13970 corp: 6/88b lim: 40 exec/s: 0 rss: 70Mb L: 14/31 MS: 1 ChangeByte- 00:06:54.206 [2024-07-15 20:20:46.542934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.206 [2024-07-15 20:20:46.542963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.206 #14 NEW cov: 12173 ft: 14054 corp: 7/102b lim: 40 exec/s: 0 rss: 70Mb L: 14/31 MS: 1 CrossOver- 00:06:54.464 [2024-07-15 20:20:46.593155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.464 [2024-07-15 20:20:46.593184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.464 [2024-07-15 20:20:46.593232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.464 [2024-07-15 20:20:46.593247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.464 [2024-07-15 20:20:46.593277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.464 [2024-07-15 20:20:46.593292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.464 #15 NEW cov: 12173 ft: 14087 corp: 8/130b lim: 40 exec/s: 0 rss: 70Mb L: 28/31 MS: 1 InsertRepeatedBytes- 00:06:54.464 [2024-07-15 20:20:46.653213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.464 [2024-07-15 20:20:46.653242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.464 #16 NEW cov: 12173 ft: 14124 corp: 9/144b lim: 40 exec/s: 0 rss: 70Mb L: 14/31 MS: 1 CopyPart- 00:06:54.464 [2024-07-15 20:20:46.713404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1515150a cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.464 [2024-07-15 20:20:46.713433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.464 [2024-07-15 20:20:46.713494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.464 [2024-07-15 20:20:46.713511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.464 #17 NEW cov: 12173 ft: 14324 corp: 10/161b lim: 40 exec/s: 0 rss: 70Mb L: 17/31 MS: 1 CopyPart- 00:06:54.464 [2024-07-15 20:20:46.793538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.465 [2024-07-15 20:20:46.793568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.723 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:54.723 #18 NEW cov: 12196 ft: 14414 corp: 11/175b lim: 40 exec/s: 0 rss: 70Mb L: 14/31 MS: 1 ChangeBit- 00:06:54.723 [2024-07-15 20:20:46.873896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.723 [2024-07-15 20:20:46.873925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.723 [2024-07-15 20:20:46.873957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:320a1515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.723 [2024-07-15 20:20:46.873987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.723 [2024-07-15 20:20:46.874016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.723 [2024-07-15 20:20:46.874031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.723 #19 NEW cov: 12196 ft: 14428 corp: 12/199b lim: 40 exec/s: 19 rss: 70Mb L: 24/31 MS: 1 CopyPart- 00:06:54.723 [2024-07-15 20:20:46.954107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.723 [2024-07-15 20:20:46.954137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.723 [2024-07-15 20:20:46.954185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 
cdw10:15151515 cdw11:320a1515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.723 [2024-07-15 20:20:46.954201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.723 [2024-07-15 20:20:46.954230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:15150a15 cdw11:15151560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.723 [2024-07-15 20:20:46.954245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.723 #20 NEW cov: 12196 ft: 14452 corp: 13/223b lim: 40 exec/s: 20 rss: 70Mb L: 24/31 MS: 1 CrossOver- 00:06:54.724 [2024-07-15 20:20:47.034276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15150a15 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.724 [2024-07-15 20:20:47.034305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.724 [2024-07-15 20:20:47.034353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:150a1515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.724 [2024-07-15 20:20:47.034369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.724 [2024-07-15 20:20:47.034399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.724 [2024-07-15 20:20:47.034417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.724 #21 NEW cov: 12196 ft: 14474 corp: 14/247b lim: 40 exec/s: 21 rss: 70Mb L: 24/31 MS: 1 CrossOver- 00:06:54.724 [2024-07-15 20:20:47.094432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e1e1e1e1 cdw11:e1e17ae1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.724 [2024-07-15 20:20:47.094469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.724 [2024-07-15 20:20:47.094518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.724 [2024-07-15 20:20:47.094533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.724 [2024-07-15 20:20:47.094563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.724 [2024-07-15 20:20:47.094578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.982 #22 NEW cov: 12196 ft: 14500 corp: 15/276b lim: 40 exec/s: 22 rss: 70Mb L: 29/31 MS: 1 InsertByte- 00:06:54.982 [2024-07-15 20:20:47.174566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a051515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.982 [2024-07-15 20:20:47.174597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.982 #23 NEW 
cov: 12196 ft: 14584 corp: 16/290b lim: 40 exec/s: 23 rss: 70Mb L: 14/31 MS: 1 ChangeBit- 00:06:54.982 [2024-07-15 20:20:47.234723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.982 [2024-07-15 20:20:47.234752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.982 #24 NEW cov: 12196 ft: 14605 corp: 17/305b lim: 40 exec/s: 24 rss: 70Mb L: 15/31 MS: 1 InsertByte- 00:06:54.982 [2024-07-15 20:20:47.314925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151c15 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.982 [2024-07-15 20:20:47.314954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.982 #25 NEW cov: 12196 ft: 14617 corp: 18/319b lim: 40 exec/s: 25 rss: 70Mb L: 14/31 MS: 1 ChangeBinInt- 00:06:55.241 [2024-07-15 20:20:47.365139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:0a151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.241 [2024-07-15 20:20:47.365169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.241 [2024-07-15 20:20:47.365204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.241 [2024-07-15 20:20:47.365220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.241 #27 NEW cov: 12196 ft: 14653 corp: 19/337b lim: 40 exec/s: 27 rss: 70Mb L: 18/31 MS: 2 ShuffleBytes-CrossOver- 00:06:55.241 [2024-07-15 20:20:47.425184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a051515 cdw11:15152c15 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.241 [2024-07-15 20:20:47.425213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.241 #28 NEW cov: 12196 ft: 14669 corp: 20/351b lim: 40 exec/s: 28 rss: 70Mb L: 14/31 MS: 1 ChangeByte- 00:06:55.241 [2024-07-15 20:20:47.505492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:0a151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.241 [2024-07-15 20:20:47.505520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.241 [2024-07-15 20:20:47.505569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.241 [2024-07-15 20:20:47.505585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.241 #29 NEW cov: 12196 ft: 14715 corp: 21/369b lim: 40 exec/s: 29 rss: 71Mb L: 18/31 MS: 1 ChangeBit- 00:06:55.241 [2024-07-15 20:20:47.585827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.241 [2024-07-15 20:20:47.585855] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.241 [2024-07-15 20:20:47.585904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.241 [2024-07-15 20:20:47.585919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.241 [2024-07-15 20:20:47.585949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.241 [2024-07-15 20:20:47.585964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.241 [2024-07-15 20:20:47.585993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffe1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.241 [2024-07-15 20:20:47.586008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.500 #30 NEW cov: 12196 ft: 15017 corp: 22/407b lim: 40 exec/s: 30 rss: 71Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:55.500 [2024-07-15 20:20:47.645994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.500 [2024-07-15 20:20:47.646024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.500 [2024-07-15 20:20:47.646060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:c2c2c2c2 cdw11:c2c2c215 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.500 [2024-07-15 20:20:47.646076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.500 [2024-07-15 20:20:47.646107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0a15c2c2 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.500 [2024-07-15 20:20:47.646122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.500 #31 NEW cov: 12196 ft: 15085 corp: 23/438b lim: 40 exec/s: 31 rss: 71Mb L: 31/38 MS: 1 ShuffleBytes- 00:06:55.500 [2024-07-15 20:20:47.726137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15150a15 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.500 [2024-07-15 20:20:47.726165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.500 [2024-07-15 20:20:47.726214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:65151515 cdw11:15150a15 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.500 [2024-07-15 20:20:47.726230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.500 [2024-07-15 20:20:47.726263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:15151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.500 [2024-07-15 20:20:47.726278] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.500 #32 NEW cov: 12196 ft: 15119 corp: 24/463b lim: 40 exec/s: 32 rss: 71Mb L: 25/38 MS: 1 InsertByte- 00:06:55.500 [2024-07-15 20:20:47.806318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a151515 cdw11:15151515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.500 [2024-07-15 20:20:47.806347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.500 [2024-07-15 20:20:47.806381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:15151515 cdw11:320a1515 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.500 [2024-07-15 20:20:47.806396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.500 #33 NEW cov: 12196 ft: 15136 corp: 25/480b lim: 40 exec/s: 33 rss: 71Mb L: 17/38 MS: 1 EraseBytes- 00:06:55.759 [2024-07-15 20:20:47.886658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-07-15 20:20:47.886687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.759 [2024-07-15 20:20:47.886735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-07-15 20:20:47.886750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.759 [2024-07-15 20:20:47.886780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-07-15 20:20:47.886795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.759 [2024-07-15 20:20:47.886824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffe1e1 cdw11:e1e1e1f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.759 [2024-07-15 20:20:47.886839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:55.759 #34 NEW cov: 12196 ft: 15153 corp: 26/518b lim: 40 exec/s: 17 rss: 71Mb L: 38/38 MS: 1 ChangeBit- 00:06:55.759 #34 DONE cov: 12196 ft: 15153 corp: 26/518b lim: 40 exec/s: 17 rss: 71Mb 00:06:55.759 Done 34 runs in 2 second(s) 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:55.759 20:20:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:55.760 [2024-07-15 20:20:48.117257] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:55.760 [2024-07-15 20:20:48.117328] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323462 ] 00:06:56.019 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.019 [2024-07-15 20:20:48.318620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.019 [2024-07-15 20:20:48.384244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.278 [2024-07-15 20:20:48.443842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.278 [2024-07-15 20:20:48.460127] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:56.278 INFO: Running with entropic power schedule (0xFF, 100). 00:06:56.278 INFO: Seed: 1574185180 00:06:56.278 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:56.278 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:56.278 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:56.278 INFO: A corpus is not provided, starting from an empty corpus 00:06:56.278 #2 INITED exec/s: 0 rss: 63Mb 00:06:56.278 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:56.278 This may also happen if the target rejected all inputs we tried so far 00:06:56.278 [2024-07-15 20:20:48.526875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.278 [2024-07-15 20:20:48.526914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.278 [2024-07-15 20:20:48.527050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.278 [2024-07-15 20:20:48.527069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.278 [2024-07-15 20:20:48.527198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.278 [2024-07-15 20:20:48.527218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.278 [2024-07-15 20:20:48.527348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.278 [2024-07-15 20:20:48.527367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.537 NEW_FUNC[1/698]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:56.537 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:56.537 #15 NEW cov: 11967 ft: 11967 corp: 2/36b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 3 ChangeByte-ChangeASCIIInt-InsertRepeatedBytes- 00:06:56.537 [2024-07-15 20:20:48.878106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d55bd5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.537 [2024-07-15 20:20:48.878154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.537 [2024-07-15 20:20:48.878292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.537 [2024-07-15 20:20:48.878313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.537 [2024-07-15 20:20:48.878447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.537 [2024-07-15 20:20:48.878468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.537 [2024-07-15 20:20:48.878595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.537 [2024-07-15 20:20:48.878616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.537 
#16 NEW cov: 12080 ft: 12568 corp: 3/71b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:06:56.796 [2024-07-15 20:20:48.928123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d55bd500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:48.928153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:48.928286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000023d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:48.928305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:48.928437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:48.928458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:48.928582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:48.928598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.796 #17 NEW cov: 12086 ft: 12812 corp: 4/106b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:56.796 [2024-07-15 20:20:48.978138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d527d5 cdw11:d5d55bd5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:48.978168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:48.978295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:48.978313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:48.978437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:48.978461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:48.978587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:48.978607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.796 #18 NEW cov: 12171 ft: 13047 corp: 5/142b lim: 40 exec/s: 0 rss: 70Mb L: 36/36 MS: 1 InsertByte- 00:06:56.796 [2024-07-15 20:20:49.018368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d55bd500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.018396] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.018534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000023d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.018553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.018674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.018690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.018816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.018834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.796 #19 NEW cov: 12171 ft: 13191 corp: 6/177b lim: 40 exec/s: 0 rss: 70Mb L: 35/36 MS: 1 ChangeByte- 00:06:56.796 [2024-07-15 20:20:49.068056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.068084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.068221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.068236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.068369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.068387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.068515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.068531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.796 #20 NEW cov: 12171 ft: 13320 corp: 7/212b lim: 40 exec/s: 0 rss: 70Mb L: 35/36 MS: 1 ShuffleBytes- 00:06:56.796 [2024-07-15 20:20:49.108146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.108172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.108299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 
20:20:49.108320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.108447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.108465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.108592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d0d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.108608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.796 #21 NEW cov: 12171 ft: 13454 corp: 8/247b lim: 40 exec/s: 0 rss: 70Mb L: 35/36 MS: 1 ChangeBinInt- 00:06:56.796 [2024-07-15 20:20:49.148779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5c3 cdw11:fb15f1c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.148806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.148932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3f2b00d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.148950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.149079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.149094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.796 [2024-07-15 20:20:49.149222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d0d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.796 [2024-07-15 20:20:49.149237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.056 #22 NEW cov: 12171 ft: 13505 corp: 9/282b lim: 40 exec/s: 0 rss: 70Mb L: 35/36 MS: 1 CMP- DE: "\303\373\025\361\311?+\000"- 00:06:57.056 [2024-07-15 20:20:49.208882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5cfd5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.056 [2024-07-15 20:20:49.208912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.056 [2024-07-15 20:20:49.209044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.056 [2024-07-15 20:20:49.209062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.056 [2024-07-15 20:20:49.209186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:57.056 [2024-07-15 20:20:49.209204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.056 [2024-07-15 20:20:49.209324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.056 [2024-07-15 20:20:49.209342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.056 #23 NEW cov: 12171 ft: 13629 corp: 10/317b lim: 40 exec/s: 0 rss: 70Mb L: 35/36 MS: 1 ChangeBinInt- 00:06:57.056 [2024-07-15 20:20:49.258700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.056 [2024-07-15 20:20:49.258729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.056 [2024-07-15 20:20:49.258873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.258890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.259014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.259033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.259152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.259170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.057 #24 NEW cov: 12171 ft: 13728 corp: 11/352b lim: 40 exec/s: 0 rss: 70Mb L: 35/36 MS: 1 ChangeASCIIInt- 00:06:57.057 [2024-07-15 20:20:49.299096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.299122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.299244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.299263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.299387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.299404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.299530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 
cdw11:d0d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.299548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.057 #25 NEW cov: 12171 ft: 13755 corp: 12/387b lim: 40 exec/s: 0 rss: 70Mb L: 35/36 MS: 1 ShuffleBytes- 00:06:57.057 [2024-07-15 20:20:49.339195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.339223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.339349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.339365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.339498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.339516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.339641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.339660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.057 #26 NEW cov: 12171 ft: 13778 corp: 13/422b lim: 40 exec/s: 0 rss: 71Mb L: 35/36 MS: 1 ShuffleBytes- 00:06:57.057 [2024-07-15 20:20:49.389376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.389403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.389530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.389548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.389672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.389688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.057 [2024-07-15 20:20:49.389815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:00d0d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.057 [2024-07-15 20:20:49.389832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.057 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 
00:06:57.057 #27 NEW cov: 12194 ft: 13828 corp: 14/458b lim: 40 exec/s: 0 rss: 71Mb L: 36/36 MS: 1 InsertByte- 00:06:57.316 [2024-07-15 20:20:49.439501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.439528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.316 [2024-07-15 20:20:49.439664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.439682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.316 [2024-07-15 20:20:49.439812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.439829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.316 [2024-07-15 20:20:49.439951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.439968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.316 #28 NEW cov: 12194 ft: 13837 corp: 15/497b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:06:57.316 [2024-07-15 20:20:49.478824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d50ad5d5 cdw11:c3fb15f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.478851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.316 #29 NEW cov: 12194 ft: 14677 corp: 16/509b lim: 40 exec/s: 29 rss: 71Mb L: 12/39 MS: 1 CrossOver- 00:06:57.316 [2024-07-15 20:20:49.529180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d55bd500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.529207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.316 [2024-07-15 20:20:49.529335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000023d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.529351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.316 #30 NEW cov: 12194 ft: 14918 corp: 17/529b lim: 40 exec/s: 30 rss: 71Mb L: 20/39 MS: 1 EraseBytes- 00:06:57.316 [2024-07-15 20:20:49.580140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.580166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.316 [2024-07-15 20:20:49.580298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) 
qid:0 cid:5 nsid:0 cdw10:0000d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.580317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.316 [2024-07-15 20:20:49.580446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.580463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.316 [2024-07-15 20:20:49.580602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d532d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.580619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.316 [2024-07-15 20:20:49.580744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:d5d0d5d5 cdw11:d5d5d531 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.580762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.316 #31 NEW cov: 12194 ft: 15008 corp: 18/569b lim: 40 exec/s: 31 rss: 71Mb L: 40/40 MS: 1 InsertByte- 00:06:57.316 [2024-07-15 20:20:49.640051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.316 [2024-07-15 20:20:49.640079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.317 [2024-07-15 20:20:49.640203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.317 [2024-07-15 20:20:49.640220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.317 [2024-07-15 20:20:49.640344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5c3fb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.317 [2024-07-15 20:20:49.640361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.317 [2024-07-15 20:20:49.640489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:15f1c93f cdw11:2b00d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.317 [2024-07-15 20:20:49.640505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.317 #32 NEW cov: 12194 ft: 15050 corp: 19/604b lim: 40 exec/s: 32 rss: 71Mb L: 35/40 MS: 1 PersAutoDict- DE: "\303\373\025\361\311?+\000"- 00:06:57.317 [2024-07-15 20:20:49.679429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d50ad5d5 cdw11:c3fbe4f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.317 [2024-07-15 20:20:49.679464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.576 #33 NEW cov: 12194 ft: 15071 corp: 20/616b lim: 40 
exec/s: 33 rss: 71Mb L: 12/40 MS: 1 ChangeByte- 00:06:57.576 [2024-07-15 20:20:49.730382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d55bd5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.730409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.730544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.730561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.730682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.730698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.730822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:5bd5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.730838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.576 #34 NEW cov: 12194 ft: 15082 corp: 21/651b lim: 40 exec/s: 34 rss: 71Mb L: 35/40 MS: 1 CopyPart- 00:06:57.576 [2024-07-15 20:20:49.770567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5c3 cdw11:fb15f1c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.770595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.770718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3f2b00d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.770735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.770853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5f6d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.770871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.770988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d0d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.771006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.576 #35 NEW cov: 12194 ft: 15101 corp: 22/686b lim: 40 exec/s: 35 rss: 71Mb L: 35/40 MS: 1 ChangeByte- 00:06:57.576 [2024-07-15 20:20:49.810611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.810636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.810754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.810771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.810886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:41d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.810906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.811021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.811037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.576 #36 NEW cov: 12194 ft: 15114 corp: 23/722b lim: 40 exec/s: 36 rss: 71Mb L: 36/40 MS: 1 InsertByte- 00:06:57.576 [2024-07-15 20:20:49.860415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d55bd52f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.860447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.860588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000023 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.860607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.576 #37 NEW cov: 12194 ft: 15155 corp: 24/743b lim: 40 exec/s: 37 rss: 71Mb L: 21/40 MS: 1 InsertByte- 00:06:57.576 [2024-07-15 20:20:49.910535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.910565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.910702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.910721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.910843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c3fb15f1 cdw11:c93f2b00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.910863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.576 #42 NEW cov: 12194 ft: 15418 corp: 25/770b lim: 40 exec/s: 42 rss: 71Mb L: 27/40 MS: 5 CrossOver-ShuffleBytes-InsertByte-ChangeBit-CrossOver- 00:06:57.576 [2024-07-15 20:20:49.950772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 
nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.950800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.950922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.950941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.951061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.951077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.576 [2024-07-15 20:20:49.951204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d531 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.576 [2024-07-15 20:20:49.951222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.836 #45 NEW cov: 12194 ft: 15431 corp: 26/803b lim: 40 exec/s: 45 rss: 71Mb L: 33/40 MS: 3 CrossOver-ShuffleBytes-CrossOver- 00:06:57.836 [2024-07-15 20:20:49.990797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:49.990827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:49.990953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:49.990972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:49.991097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c3fbffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:49.991114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:49.991239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:15f1c93f cdw11:2b00d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:49.991260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.836 #46 NEW cov: 12194 ft: 15452 corp: 27/836b lim: 40 exec/s: 46 rss: 72Mb L: 33/40 MS: 1 InsertRepeatedBytes- 00:06:57.836 [2024-07-15 20:20:50.061474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.061509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:50.061629] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.061649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:50.061777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:5bd5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.061795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:50.061923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d500d0d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.061940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.836 #47 NEW cov: 12194 ft: 15465 corp: 28/873b lim: 40 exec/s: 47 rss: 72Mb L: 37/40 MS: 1 InsertByte- 00:06:57.836 [2024-07-15 20:20:50.121652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5c3 cdw11:fbfb15f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.121684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:50.121816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c93f2b00 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.121833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:50.121965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.121985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:50.122109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d0d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.122128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.836 #48 NEW cov: 12194 ft: 15487 corp: 29/908b lim: 40 exec/s: 48 rss: 72Mb L: 35/40 MS: 1 CopyPart- 00:06:57.836 [2024-07-15 20:20:50.161245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d55bd500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.161273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:50.161394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00d5d500 cdw11:23d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.161412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 
20:20:50.161546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.161563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.836 [2024-07-15 20:20:50.161685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.836 [2024-07-15 20:20:50.161703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.837 #49 NEW cov: 12194 ft: 15504 corp: 30/943b lim: 40 exec/s: 49 rss: 72Mb L: 35/40 MS: 1 ShuffleBytes- 00:06:57.837 [2024-07-15 20:20:50.201586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.837 [2024-07-15 20:20:50.201615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.837 [2024-07-15 20:20:50.201731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5c3fb cdw11:15f1c93f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.837 [2024-07-15 20:20:50.201748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.837 [2024-07-15 20:20:50.201871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:2b00d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.837 [2024-07-15 20:20:50.201889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.837 [2024-07-15 20:20:50.202009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.837 [2024-07-15 20:20:50.202027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.096 #50 NEW cov: 12194 ft: 15512 corp: 31/978b lim: 40 exec/s: 50 rss: 72Mb L: 35/40 MS: 1 PersAutoDict- DE: "\303\373\025\361\311?+\000"- 00:06:58.096 [2024-07-15 20:20:50.241437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.096 [2024-07-15 20:20:50.241470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.096 [2024-07-15 20:20:50.241589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.096 [2024-07-15 20:20:50.241607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.096 [2024-07-15 20:20:50.241724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.096 [2024-07-15 20:20:50.241745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.096 [2024-07-15 20:20:50.241858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:00d0d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.096 [2024-07-15 20:20:50.241877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.096 #56 NEW cov: 12194 ft: 15523 corp: 32/1014b lim: 40 exec/s: 56 rss: 72Mb L: 36/40 MS: 1 ChangeASCIIInt- 00:06:58.096 [2024-07-15 20:20:50.281727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.096 [2024-07-15 20:20:50.281755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.281882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.281901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.282023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.282039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.282160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d531 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.282177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.097 #57 NEW cov: 12194 ft: 15532 corp: 33/1051b lim: 40 exec/s: 57 rss: 72Mb L: 37/40 MS: 1 CopyPart- 00:06:58.097 [2024-07-15 20:20:50.332372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.332400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.332532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.332550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.332678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.332696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.332826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.332845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.332968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d532 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.332984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:58.097 #58 NEW cov: 12194 ft: 15537 corp: 34/1091b lim: 40 exec/s: 58 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:58.097 [2024-07-15 20:20:50.372259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5c3 cdw11:fb15f1c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.372294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.372426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:3f2b00d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.372446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.372571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5f6d5d5 cdw11:d5d52ad5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.372587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.372716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d0d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.372733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.097 #59 NEW cov: 12194 ft: 15557 corp: 35/1127b lim: 40 exec/s: 59 rss: 72Mb L: 36/40 MS: 1 CrossOver- 00:06:58.097 [2024-07-15 20:20:50.422303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d55bd500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.422329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.422447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00d5d500 cdw11:23d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.422467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.422595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5bd50000 cdw11:d5d50023 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.422614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.422737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.422755] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.097 #60 NEW cov: 12194 ft: 15562 corp: 36/1162b lim: 40 exec/s: 60 rss: 72Mb L: 35/40 MS: 1 CopyPart- 00:06:58.097 [2024-07-15 20:20:50.472456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d5d5d5 cdw11:d55bd500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.472483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.472613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000023d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.472630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.472753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d5d5d5c5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.472780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.097 [2024-07-15 20:20:50.472909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.097 [2024-07-15 20:20:50.472937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.357 #61 NEW cov: 12194 ft: 15567 corp: 37/1197b lim: 40 exec/s: 61 rss: 72Mb L: 35/40 MS: 1 ChangeBit- 00:06:58.357 [2024-07-15 20:20:50.512680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:d5d527d5 cdw11:d5d55bd5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.357 [2024-07-15 20:20:50.512707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.357 [2024-07-15 20:20:50.512849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d5d5d5d5 cdw11:d5ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.357 [2024-07-15 20:20:50.512866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.357 [2024-07-15 20:20:50.512990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffd5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.357 [2024-07-15 20:20:50.513007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.357 [2024-07-15 20:20:50.513132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d5d5d5d5 cdw11:d5d5d5d5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.357 [2024-07-15 20:20:50.513149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.357 #62 NEW cov: 12194 ft: 15590 corp: 38/1233b lim: 40 exec/s: 31 rss: 72Mb L: 36/40 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:58.357 #62 DONE cov: 12194 ft: 15590 corp: 38/1233b lim: 40 exec/s: 31 rss: 72Mb 00:06:58.357 ###### Recommended dictionary. 
###### 00:06:58.357 "\303\373\025\361\311?+\000" # Uses: 2 00:06:58.357 "\377\377\377\377" # Uses: 0 00:06:58.357 ###### End of recommended dictionary. ###### 00:06:58.357 Done 62 runs in 2 second(s) 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:58.357 20:20:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:58.357 [2024-07-15 20:20:50.714483] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:58.357 [2024-07-15 20:20:50.714551] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323997 ] 00:06:58.616 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.616 [2024-07-15 20:20:50.890866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.616 [2024-07-15 20:20:50.956333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.875 [2024-07-15 20:20:51.015579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.875 [2024-07-15 20:20:51.031834] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:58.875 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.875 INFO: Seed: 4145182452 00:06:58.875 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:06:58.875 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:06:58.875 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:58.875 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.875 #2 INITED exec/s: 0 rss: 63Mb 00:06:58.875 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:58.875 This may also happen if the target rejected all inputs we tried so far 00:06:58.875 [2024-07-15 20:20:51.077483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.875 [2024-07-15 20:20:51.077511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.875 [2024-07-15 20:20:51.077575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.875 [2024-07-15 20:20:51.077588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.875 [2024-07-15 20:20:51.077650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.875 [2024-07-15 20:20:51.077663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.134 NEW_FUNC[1/697]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:59.134 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:59.134 #7 NEW cov: 11955 ft: 11952 corp: 2/29b lim: 40 exec/s: 0 rss: 70Mb L: 28/28 MS: 5 InsertByte-ChangeBit-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:06:59.134 [2024-07-15 20:20:51.388209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:28280aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.134 [2024-07-15 20:20:51.388250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.134 
[2024-07-15 20:20:51.388320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.134 [2024-07-15 20:20:51.388338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.134 [2024-07-15 20:20:51.388404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.134 [2024-07-15 20:20:51.388426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.134 #11 NEW cov: 12068 ft: 12579 corp: 3/57b lim: 40 exec/s: 0 rss: 71Mb L: 28/28 MS: 4 CrossOver-ChangeByte-CopyPart-InsertRepeatedBytes- 00:06:59.134 [2024-07-15 20:20:51.428191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.134 [2024-07-15 20:20:51.428216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.134 [2024-07-15 20:20:51.428287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.134 [2024-07-15 20:20:51.428301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.134 [2024-07-15 20:20:51.428357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:08000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.134 [2024-07-15 20:20:51.428371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.134 #12 NEW cov: 12074 ft: 12764 corp: 4/85b lim: 40 exec/s: 0 rss: 71Mb L: 28/28 MS: 1 ChangeBit- 00:06:59.134 [2024-07-15 20:20:51.478170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.134 [2024-07-15 20:20:51.478195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.134 [2024-07-15 20:20:51.478257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.134 [2024-07-15 20:20:51.478270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.134 #13 NEW cov: 12159 ft: 13251 corp: 5/106b lim: 40 exec/s: 0 rss: 71Mb L: 21/28 MS: 1 EraseBytes- 00:06:59.393 [2024-07-15 20:20:51.518214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.518240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.393 #14 NEW cov: 12159 ft: 13632 corp: 6/120b lim: 40 exec/s: 0 rss: 71Mb L: 14/28 MS: 1 EraseBytes- 00:06:59.393 [2024-07-15 20:20:51.558298] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.558324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.393 #15 NEW cov: 12159 ft: 13714 corp: 7/129b lim: 40 exec/s: 0 rss: 71Mb L: 9/28 MS: 1 CMP- DE: "\000\000\000\000\000\000\004\000"- 00:06:59.393 [2024-07-15 20:20:51.598437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.598467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.393 #17 NEW cov: 12159 ft: 13801 corp: 8/139b lim: 40 exec/s: 0 rss: 71Mb L: 10/28 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:59.393 [2024-07-15 20:20:51.638784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.638809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.393 [2024-07-15 20:20:51.638872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.638886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.393 [2024-07-15 20:20:51.638960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.638974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.393 #18 NEW cov: 12159 ft: 13868 corp: 9/167b lim: 40 exec/s: 0 rss: 71Mb L: 28/28 MS: 1 ChangeBit- 00:06:59.393 [2024-07-15 20:20:51.678665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.678690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.393 #19 NEW cov: 12159 ft: 13948 corp: 10/176b lim: 40 exec/s: 0 rss: 71Mb L: 9/28 MS: 1 ShuffleBytes- 00:06:59.393 [2024-07-15 20:20:51.728975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.729001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.393 [2024-07-15 20:20:51.729064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:04000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.729077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.393 [2024-07-15 20:20:51.729137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.393 [2024-07-15 20:20:51.729151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.393 #20 NEW cov: 12159 ft: 13978 corp: 11/204b lim: 40 exec/s: 0 rss: 71Mb L: 28/28 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\004\000"- 00:06:59.653 [2024-07-15 20:20:51.779022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.653 [2024-07-15 20:20:51.779048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.653 [2024-07-15 20:20:51.779106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.653 [2024-07-15 20:20:51.779120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.653 #21 NEW cov: 12159 ft: 14004 corp: 12/227b lim: 40 exec/s: 0 rss: 71Mb L: 23/28 MS: 1 EraseBytes- 00:06:59.653 [2024-07-15 20:20:51.819133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.653 [2024-07-15 20:20:51.819159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.653 [2024-07-15 20:20:51.819220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.653 [2024-07-15 20:20:51.819234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.653 #22 NEW cov: 12159 ft: 14079 corp: 13/248b lim: 40 exec/s: 0 rss: 71Mb L: 21/28 MS: 1 ShuffleBytes- 00:06:59.653 [2024-07-15 20:20:51.869301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.653 [2024-07-15 20:20:51.869329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.653 [2024-07-15 20:20:51.869388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.653 [2024-07-15 20:20:51.869402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.653 #23 NEW cov: 12159 ft: 14152 corp: 14/269b lim: 40 exec/s: 0 rss: 71Mb L: 21/28 MS: 1 ShuffleBytes- 00:06:59.653 [2024-07-15 20:20:51.919301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:62626262 cdw11:6262620a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.653 [2024-07-15 20:20:51.919326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.653 #24 NEW cov: 12159 ft: 14234 corp: 15/277b lim: 40 exec/s: 0 rss: 71Mb L: 8/28 MS: 1 InsertRepeatedBytes- 00:06:59.653 [2024-07-15 
20:20:51.959399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.653 [2024-07-15 20:20:51.959424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.653 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:06:59.653 #25 NEW cov: 12182 ft: 14279 corp: 16/286b lim: 40 exec/s: 0 rss: 72Mb L: 9/28 MS: 1 ChangeByte- 00:06:59.653 [2024-07-15 20:20:51.999474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.653 [2024-07-15 20:20:51.999499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.653 #26 NEW cov: 12182 ft: 14322 corp: 17/299b lim: 40 exec/s: 0 rss: 72Mb L: 13/28 MS: 1 EraseBytes- 00:06:59.912 [2024-07-15 20:20:52.049907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.049933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.049990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.050004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.050061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.050074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.912 #27 NEW cov: 12182 ft: 14346 corp: 18/329b lim: 40 exec/s: 27 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:59.912 [2024-07-15 20:20:52.090161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.090186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.090245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000000a7 cdw11:a7a7a7a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.090259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.090315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.090329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.090387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.090400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.912 #28 NEW cov: 12182 ft: 14791 corp: 19/363b lim: 40 exec/s: 28 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:59.912 [2024-07-15 20:20:52.140153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.140178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.140237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.140251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.140310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:ffff000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.140323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.912 #29 NEW cov: 12182 ft: 14802 corp: 20/388b lim: 40 exec/s: 29 rss: 72Mb L: 25/34 MS: 1 InsertRepeatedBytes- 00:06:59.912 [2024-07-15 20:20:52.180407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000c0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.180432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.180498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.180512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.180590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.180605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.180662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.180675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.912 #30 NEW cov: 12182 ft: 14822 corp: 21/426b lim: 40 exec/s: 30 rss: 72Mb L: 38/38 MS: 1 CMP- DE: "\014\000\000\000"- 00:06:59.912 [2024-07-15 20:20:52.230691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.230716] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.230774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.230791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.230849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:000a2cda SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.230862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.230918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.230932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.230987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:dadadada cdw11:dadadada SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.231000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.912 #31 NEW cov: 12182 ft: 14875 corp: 22/466b lim: 40 exec/s: 31 rss: 72Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:59.912 [2024-07-15 20:20:52.280722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.280748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.280810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.280824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.280884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.280898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.912 [2024-07-15 20:20:52.280955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.912 [2024-07-15 20:20:52.280969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.171 [2024-07-15 20:20:52.330562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.171 [2024-07-15 20:20:52.330586] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.171 [2024-07-15 20:20:52.330644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.171 [2024-07-15 20:20:52.330658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.171 #33 NEW cov: 12182 ft: 14876 corp: 23/489b lim: 40 exec/s: 33 rss: 72Mb L: 23/40 MS: 2 CopyPart-CrossOver- 00:07:00.171 [2024-07-15 20:20:52.370707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.171 [2024-07-15 20:20:52.370731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.171 [2024-07-15 20:20:52.370789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.171 [2024-07-15 20:20:52.370803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.171 #34 NEW cov: 12182 ft: 14880 corp: 24/510b lim: 40 exec/s: 34 rss: 72Mb L: 21/40 MS: 1 EraseBytes- 00:07:00.171 [2024-07-15 20:20:52.420823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.171 [2024-07-15 20:20:52.420848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.171 [2024-07-15 20:20:52.420905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.171 [2024-07-15 20:20:52.420920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.171 #35 NEW cov: 12182 ft: 14898 corp: 25/531b lim: 40 exec/s: 35 rss: 72Mb L: 21/40 MS: 1 ShuffleBytes- 00:07:00.171 [2024-07-15 20:20:52.470827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.171 [2024-07-15 20:20:52.470853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.171 #36 NEW cov: 12182 ft: 14923 corp: 26/540b lim: 40 exec/s: 36 rss: 72Mb L: 9/40 MS: 1 EraseBytes- 00:07:00.171 [2024-07-15 20:20:52.521002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.171 [2024-07-15 20:20:52.521028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.430 #37 NEW cov: 12182 ft: 14936 corp: 27/549b lim: 40 exec/s: 37 rss: 73Mb L: 9/40 MS: 1 CopyPart- 00:07:00.430 [2024-07-15 20:20:52.571152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0d0000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 
20:20:52.571177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.430 #38 NEW cov: 12182 ft: 14960 corp: 28/559b lim: 40 exec/s: 38 rss: 73Mb L: 10/40 MS: 1 CMP- DE: "\015\000"- 00:07:00.430 [2024-07-15 20:20:52.611390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.611415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.611481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.611495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.430 #39 NEW cov: 12182 ft: 15036 corp: 29/580b lim: 40 exec/s: 39 rss: 73Mb L: 21/40 MS: 1 PersAutoDict- DE: "\014\000\000\000"- 00:07:00.430 [2024-07-15 20:20:52.651526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00280000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.651552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.651610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.651624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.430 #40 NEW cov: 12182 ft: 15110 corp: 30/601b lim: 40 exec/s: 40 rss: 73Mb L: 21/40 MS: 1 ChangeByte- 00:07:00.430 [2024-07-15 20:20:52.701907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000c0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.701935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.701993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.702006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.702065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.702079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.702134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.702147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.430 #41 NEW cov: 
12182 ft: 15145 corp: 31/639b lim: 40 exec/s: 41 rss: 73Mb L: 38/40 MS: 1 ChangeBit- 00:07:00.430 [2024-07-15 20:20:52.752054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.752078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.752135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.752148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.752221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:080c0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.752235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.752289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000a2c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.752302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.430 #42 NEW cov: 12182 ft: 15167 corp: 32/671b lim: 40 exec/s: 42 rss: 73Mb L: 32/40 MS: 1 PersAutoDict- DE: "\014\000\000\000"- 00:07:00.430 [2024-07-15 20:20:52.802202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000c0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.802226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.802285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.802299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.802355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a7a7a7a3 cdw11:a7a7a7a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.802369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.430 [2024-07-15 20:20:52.802423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:a7a7a7a7 cdw11:a7a7a7a7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.430 [2024-07-15 20:20:52.802439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.689 #43 NEW cov: 12182 ft: 15174 corp: 33/709b lim: 40 exec/s: 43 rss: 73Mb L: 38/40 MS: 1 ChangeBinInt- 00:07:00.689 [2024-07-15 20:20:52.852064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:52.852089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.689 [2024-07-15 20:20:52.852150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:52.852163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.689 #47 NEW cov: 12182 ft: 15178 corp: 34/732b lim: 40 exec/s: 47 rss: 73Mb L: 23/40 MS: 4 InsertByte-CopyPart-CrossOver-CrossOver- 00:07:00.689 [2024-07-15 20:20:52.892328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:52.892353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.689 [2024-07-15 20:20:52.892411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:52.892424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.689 [2024-07-15 20:20:52.892485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:ffff000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:52.892498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.689 #48 NEW cov: 12182 ft: 15187 corp: 35/757b lim: 40 exec/s: 48 rss: 73Mb L: 25/40 MS: 1 ShuffleBytes- 00:07:00.689 [2024-07-15 20:20:52.932280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:52.932306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.689 [2024-07-15 20:20:52.932364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:003d0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:52.932377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.689 #49 NEW cov: 12182 ft: 15197 corp: 36/780b lim: 40 exec/s: 49 rss: 73Mb L: 23/40 MS: 1 ChangeByte- 00:07:00.689 [2024-07-15 20:20:52.972287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a090000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:52.972313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.689 #50 NEW cov: 12182 ft: 15226 corp: 37/789b lim: 40 exec/s: 50 rss: 73Mb L: 9/40 MS: 1 ChangeBinInt- 00:07:00.689 [2024-07-15 20:20:53.012537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:53.012563] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.689 [2024-07-15 20:20:53.012634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:04000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:53.012651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.689 #51 NEW cov: 12182 ft: 15236 corp: 38/812b lim: 40 exec/s: 51 rss: 73Mb L: 23/40 MS: 1 ChangeBit- 00:07:00.689 [2024-07-15 20:20:53.052598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a008484 cdw11:84848484 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:53.052623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.689 [2024-07-15 20:20:53.052684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:84848400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.689 [2024-07-15 20:20:53.052698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.948 #52 NEW cov: 12182 ft: 15250 corp: 39/830b lim: 40 exec/s: 26 rss: 73Mb L: 18/40 MS: 1 InsertRepeatedBytes- 00:07:00.948 #52 DONE cov: 12182 ft: 15250 corp: 39/830b lim: 40 exec/s: 26 rss: 73Mb 00:07:00.948 ###### Recommended dictionary. ###### 00:07:00.948 "\000\000\000\000\000\000\004\000" # Uses: 1 00:07:00.948 "\014\000\000\000" # Uses: 2 00:07:00.948 "\015\000" # Uses: 0 00:07:00.948 ###### End of recommended dictionary. 
###### 00:07:00.948 Done 52 runs in 2 second(s) 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:00.948 20:20:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:00.948 [2024-07-15 20:20:53.254071] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:00.948 [2024-07-15 20:20:53.254142] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324409 ] 00:07:00.948 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.206 [2024-07-15 20:20:53.436720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.206 [2024-07-15 20:20:53.502622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.206 [2024-07-15 20:20:53.562025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.206 [2024-07-15 20:20:53.578303] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:01.464 INFO: Running with entropic power schedule (0xFF, 100). 00:07:01.464 INFO: Seed: 2395634273 00:07:01.464 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:01.464 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:01.464 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:01.464 INFO: A corpus is not provided, starting from an empty corpus 00:07:01.464 #2 INITED exec/s: 0 rss: 63Mb 00:07:01.464 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:01.464 This may also happen if the target rejected all inputs we tried so far 00:07:01.464 [2024-07-15 20:20:53.627310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.464 [2024-07-15 20:20:53.627340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.464 [2024-07-15 20:20:53.627400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.464 [2024-07-15 20:20:53.627415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.464 [2024-07-15 20:20:53.627477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.464 [2024-07-15 20:20:53.627493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.721 NEW_FUNC[1/698]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:01.721 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:01.722 #4 NEW cov: 11949 ft: 11948 corp: 2/28b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:01.722 [2024-07-15 20:20:53.948363] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:53.948402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:53.948481] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:53.948501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:53.948565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:53.948585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:53.948649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:53.948666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.722 #5 NEW cov: 12062 ft: 12896 corp: 3/60b lim: 35 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:07:01.722 [2024-07-15 20:20:53.998428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:53.998461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:53.998540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:53.998558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:53.998615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:53.998629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:53.998692] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:53.998708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.722 #6 NEW cov: 12068 ft: 13131 corp: 4/92b lim: 35 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeByte- 00:07:01.722 [2024-07-15 20:20:54.048360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:54.048388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:54.048475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:54.048492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:54.048553] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:54.048568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT 
SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.722 #7 NEW cov: 12153 ft: 13393 corp: 5/119b lim: 35 exec/s: 0 rss: 70Mb L: 27/32 MS: 1 ChangeBit- 00:07:01.722 [2024-07-15 20:20:54.088646] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:54.088673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:54.088753] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:54.088770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:54.088833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:54.088850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.722 [2024-07-15 20:20:54.088910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.722 [2024-07-15 20:20:54.088924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.979 #8 NEW cov: 12153 ft: 13447 corp: 6/147b lim: 35 exec/s: 0 rss: 70Mb L: 28/32 MS: 1 EraseBytes- 00:07:01.979 [2024-07-15 20:20:54.138797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.138824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.979 [2024-07-15 20:20:54.138904] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.138920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.979 [2024-07-15 20:20:54.138984] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.139000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.979 [2024-07-15 20:20:54.139065] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.139081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.979 #9 NEW cov: 12153 ft: 13549 corp: 7/179b lim: 35 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeBit- 00:07:01.979 [2024-07-15 20:20:54.178580] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.178607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.979 [2024-07-15 20:20:54.178669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.178685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.979 #10 NEW cov: 12153 ft: 13889 corp: 8/195b lim: 35 exec/s: 0 rss: 71Mb L: 16/32 MS: 1 EraseBytes- 00:07:01.979 [2024-07-15 20:20:54.228849] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.228876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.979 [2024-07-15 20:20:54.228956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.228972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.979 [2024-07-15 20:20:54.229037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.229054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.979 #11 NEW cov: 12153 ft: 13946 corp: 9/222b lim: 35 exec/s: 0 rss: 71Mb L: 27/32 MS: 1 ChangeByte- 00:07:01.979 [2024-07-15 20:20:54.268834] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.268861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.979 [2024-07-15 20:20:54.268941] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.268958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.979 #12 NEW cov: 12153 ft: 14066 corp: 10/238b lim: 35 exec/s: 0 rss: 71Mb L: 16/32 MS: 1 CopyPart- 00:07:01.979 [2024-07-15 20:20:54.319305] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.319333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.979 [2024-07-15 20:20:54.319412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.319432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.979 [2024-07-15 20:20:54.319498] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.319512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:01.979 [2024-07-15 20:20:54.319576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.979 [2024-07-15 20:20:54.319602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.979 #13 NEW cov: 12153 ft: 14104 corp: 11/270b lim: 35 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:01.979 [2024-07-15 20:20:54.359389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.980 [2024-07-15 20:20:54.359416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.980 [2024-07-15 20:20:54.359482] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.980 [2024-07-15 20:20:54.359498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.980 [2024-07-15 20:20:54.359560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.980 [2024-07-15 20:20:54.359575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.980 [2024-07-15 20:20:54.359637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.980 [2024-07-15 20:20:54.359652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.238 #14 NEW cov: 12153 ft: 14109 corp: 12/303b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 InsertByte- 00:07:02.238 [2024-07-15 20:20:54.399331] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.399358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 20:20:54.399435] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.399457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 20:20:54.399520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.399536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.238 #15 NEW cov: 12153 ft: 14122 corp: 13/330b lim: 35 exec/s: 0 rss: 71Mb L: 27/33 MS: 1 CopyPart- 00:07:02.238 [2024-07-15 20:20:54.439620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.439647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 
20:20:54.439726] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.439741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 20:20:54.439809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.439823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 20:20:54.439887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.439902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.238 #16 NEW cov: 12153 ft: 14144 corp: 14/362b lim: 35 exec/s: 0 rss: 71Mb L: 32/33 MS: 1 ChangeByte- 00:07:02.238 [2024-07-15 20:20:54.489791] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.489819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 20:20:54.489880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000071 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.489895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 20:20:54.489974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.489990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 20:20:54.490054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.490069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.238 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:02.238 #17 NEW cov: 12176 ft: 14186 corp: 15/395b lim: 35 exec/s: 0 rss: 71Mb L: 33/33 MS: 1 InsertByte- 00:07:02.238 [2024-07-15 20:20:54.529529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.529558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 20:20:54.529624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.529639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.238 #18 NEW cov: 12176 
ft: 14219 corp: 16/412b lim: 35 exec/s: 0 rss: 71Mb L: 17/33 MS: 1 InsertByte- 00:07:02.238 [2024-07-15 20:20:54.579705] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000066 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.579732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.238 [2024-07-15 20:20:54.579811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.238 [2024-07-15 20:20:54.579828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.238 #22 NEW cov: 12176 ft: 14295 corp: 17/431b lim: 35 exec/s: 0 rss: 71Mb L: 19/33 MS: 4 CopyPart-ChangeByte-InsertByte-CrossOver- 00:07:02.497 [2024-07-15 20:20:54.620133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.620162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.497 [2024-07-15 20:20:54.620229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.620244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.497 [2024-07-15 20:20:54.620308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.620324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.497 [2024-07-15 20:20:54.620386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.620401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.497 #23 NEW cov: 12176 ft: 14340 corp: 18/461b lim: 35 exec/s: 23 rss: 71Mb L: 30/33 MS: 1 InsertRepeatedBytes- 00:07:02.497 [2024-07-15 20:20:54.660235] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000066 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.660263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.497 [2024-07-15 20:20:54.660342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.660359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.497 [2024-07-15 20:20:54.660423] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.660440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.497 
[2024-07-15 20:20:54.660506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.660522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.497 #24 NEW cov: 12176 ft: 14347 corp: 19/495b lim: 35 exec/s: 24 rss: 71Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:02.497 [2024-07-15 20:20:54.710219] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.710247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.497 [2024-07-15 20:20:54.710311] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.710328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.497 [2024-07-15 20:20:54.710391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.710406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.497 #25 NEW cov: 12176 ft: 14357 corp: 20/522b lim: 35 exec/s: 25 rss: 71Mb L: 27/34 MS: 1 EraseBytes- 00:07:02.497 [2024-07-15 20:20:54.760501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.760531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.497 [2024-07-15 20:20:54.760600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.497 [2024-07-15 20:20:54.760616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.498 [2024-07-15 20:20:54.760696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.498 [2024-07-15 20:20:54.760714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.498 [2024-07-15 20:20:54.760778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.498 [2024-07-15 20:20:54.760795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.498 #26 NEW cov: 12176 ft: 14375 corp: 21/554b lim: 35 exec/s: 26 rss: 71Mb L: 32/34 MS: 1 ChangeBit- 00:07:02.498 [2024-07-15 20:20:54.810700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.498 [2024-07-15 20:20:54.810728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.498 [2024-07-15 
20:20:54.810792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.498 [2024-07-15 20:20:54.810807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.498 [2024-07-15 20:20:54.810869] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.498 [2024-07-15 20:20:54.810884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.498 [2024-07-15 20:20:54.810948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.498 [2024-07-15 20:20:54.810963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.498 #27 NEW cov: 12176 ft: 14396 corp: 22/588b lim: 35 exec/s: 27 rss: 72Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:02.498 [2024-07-15 20:20:54.860648] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.498 [2024-07-15 20:20:54.860677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.498 [2024-07-15 20:20:54.860755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.498 [2024-07-15 20:20:54.860771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.498 [2024-07-15 20:20:54.860833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.498 [2024-07-15 20:20:54.860847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.756 NEW_FUNC[1/1]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:02.756 #28 NEW cov: 12186 ft: 14422 corp: 23/613b lim: 35 exec/s: 28 rss: 72Mb L: 25/34 MS: 1 CrossOver- 00:07:02.756 [2024-07-15 20:20:54.900893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:54.900921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.756 [2024-07-15 20:20:54.900996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:54.901015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.756 [2024-07-15 20:20:54.901078] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:54.901093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:02.756 [2024-07-15 20:20:54.901157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:54.901173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.756 #29 NEW cov: 12186 ft: 14441 corp: 24/647b lim: 35 exec/s: 29 rss: 72Mb L: 34/34 MS: 1 ChangeByte- 00:07:02.756 [2024-07-15 20:20:54.951073] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000066 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:54.951099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.756 [2024-07-15 20:20:54.951178] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:54.951194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.756 [2024-07-15 20:20:54.951259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:54.951274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.756 [2024-07-15 20:20:54.951335] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:54.951351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.756 #30 NEW cov: 12186 ft: 14464 corp: 25/681b lim: 35 exec/s: 30 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:07:02.756 [2024-07-15 20:20:55.001013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:55.001041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.756 [2024-07-15 20:20:55.001121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:55.001139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.756 [2024-07-15 20:20:55.001205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.756 [2024-07-15 20:20:55.001220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.756 #31 NEW cov: 12186 ft: 14495 corp: 26/708b lim: 35 exec/s: 31 rss: 72Mb L: 27/34 MS: 1 ShuffleBytes- 00:07:02.757 [2024-07-15 20:20:55.051326] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.757 [2024-07-15 20:20:55.051354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:02.757 [2024-07-15 20:20:55.051431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.757 [2024-07-15 20:20:55.051455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.757 [2024-07-15 20:20:55.051512] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.757 [2024-07-15 20:20:55.051528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.757 [2024-07-15 20:20:55.051593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.757 [2024-07-15 20:20:55.051609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.757 #32 NEW cov: 12186 ft: 14509 corp: 27/740b lim: 35 exec/s: 32 rss: 72Mb L: 32/34 MS: 1 ChangeBinInt- 00:07:02.757 [2024-07-15 20:20:55.091654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.757 [2024-07-15 20:20:55.091681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.757 [2024-07-15 20:20:55.091743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.757 [2024-07-15 20:20:55.091759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.757 [2024-07-15 20:20:55.091818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.757 [2024-07-15 20:20:55.091834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.757 [2024-07-15 20:20:55.091893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.757 [2024-07-15 20:20:55.091909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.757 [2024-07-15 20:20:55.091969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.757 [2024-07-15 20:20:55.091984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.757 #33 NEW cov: 12186 ft: 14584 corp: 28/775b lim: 35 exec/s: 33 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:07:03.015 [2024-07-15 20:20:55.141836] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.141863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.141926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.141941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.142004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.142020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.142079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.142093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.142154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.142172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.015 #34 NEW cov: 12186 ft: 14604 corp: 29/810b lim: 35 exec/s: 34 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:07:03.015 [2024-07-15 20:20:55.191814] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000066 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.191841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.191921] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.191937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.191999] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.192015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.192075] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.192091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.015 #35 NEW cov: 12186 ft: 14609 corp: 30/844b lim: 35 exec/s: 35 rss: 72Mb L: 34/35 MS: 1 ChangeBit- 00:07:03.015 [2024-07-15 20:20:55.241948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.241975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.242053] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.242070] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.242132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.242146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.242209] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.242224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.015 #36 NEW cov: 12186 ft: 14646 corp: 31/875b lim: 35 exec/s: 36 rss: 72Mb L: 31/35 MS: 1 EraseBytes- 00:07:03.015 [2024-07-15 20:20:55.282027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.282053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.282114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.282130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.282192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.282207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.282270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000094 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.282286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.322215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.322241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.322306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.322322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.322398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.322414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.015 [2024-07-15 20:20:55.322480] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 
cdw10:80000094 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.015 [2024-07-15 20:20:55.322497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.015 #38 NEW cov: 12186 ft: 14699 corp: 32/907b lim: 35 exec/s: 38 rss: 72Mb L: 32/35 MS: 2 ChangeBit-CopyPart- 00:07:03.015 [2024-07-15 20:20:55.362310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.016 [2024-07-15 20:20:55.362337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.016 [2024-07-15 20:20:55.362417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.016 [2024-07-15 20:20:55.362433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.016 [2024-07-15 20:20:55.362499] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.016 [2024-07-15 20:20:55.362515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.016 [2024-07-15 20:20:55.362576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.016 [2024-07-15 20:20:55.362591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.275 #39 NEW cov: 12186 ft: 14706 corp: 33/939b lim: 35 exec/s: 39 rss: 72Mb L: 32/35 MS: 1 CrossOver- 00:07:03.275 [2024-07-15 20:20:55.412077] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.412105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.412166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.412181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.275 #40 NEW cov: 12186 ft: 14755 corp: 34/956b lim: 35 exec/s: 40 rss: 72Mb L: 17/35 MS: 1 ChangeBit- 00:07:03.275 [2024-07-15 20:20:55.462517] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.462546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.462625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.462642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.462707] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.462723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.462787] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.462801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.275 #41 NEW cov: 12186 ft: 14770 corp: 35/984b lim: 35 exec/s: 41 rss: 73Mb L: 28/35 MS: 1 EraseBytes- 00:07:03.275 [2024-07-15 20:20:55.502489] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.502517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.502583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.502600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.502663] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.502679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.275 #42 NEW cov: 12186 ft: 14795 corp: 36/1009b lim: 35 exec/s: 42 rss: 73Mb L: 25/35 MS: 1 CopyPart- 00:07:03.275 [2024-07-15 20:20:55.552779] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.552805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.552871] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.552886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.552951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.552967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.553031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.553047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.275 #43 NEW cov: 12186 ft: 14799 corp: 37/1037b lim: 35 exec/s: 43 rss: 73Mb L: 28/35 MS: 1 InsertByte- 00:07:03.275 [2024-07-15 20:20:55.592904] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.592930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.593013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.593030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.593094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.275 [2024-07-15 20:20:55.593108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.275 [2024-07-15 20:20:55.593172] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.276 [2024-07-15 20:20:55.593188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.276 #44 NEW cov: 12186 ft: 14813 corp: 38/1069b lim: 35 exec/s: 22 rss: 73Mb L: 32/35 MS: 1 ChangeByte- 00:07:03.276 #44 DONE cov: 12186 ft: 14813 corp: 38/1069b lim: 35 exec/s: 22 rss: 73Mb 00:07:03.276 Done 44 runs in 2 second(s) 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz 
-- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:03.535 20:20:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:03.535 [2024-07-15 20:20:55.792674] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:03.535 [2024-07-15 20:20:55.792743] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324818 ] 00:07:03.535 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.794 [2024-07-15 20:20:55.974041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.794 [2024-07-15 20:20:56.040469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.794 [2024-07-15 20:20:56.099962] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.794 [2024-07-15 20:20:56.116266] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:03.794 INFO: Running with entropic power schedule (0xFF, 100). 00:07:03.794 INFO: Seed: 640272686 00:07:03.794 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:03.794 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:03.794 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:03.794 INFO: A corpus is not provided, starting from an empty corpus 00:07:03.794 #2 INITED exec/s: 0 rss: 64Mb 00:07:03.794 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:03.794 This may also happen if the target rejected all inputs we tried so far 00:07:04.053 [2024-07-15 20:20:56.182764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.053 [2024-07-15 20:20:56.182805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.053 [2024-07-15 20:20:56.182962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.053 [2024-07-15 20:20:56.182979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.053 [2024-07-15 20:20:56.183113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.053 [2024-07-15 20:20:56.183133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.312 NEW_FUNC[1/697]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:04.312 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:04.312 #9 NEW cov: 11937 ft: 11932 corp: 2/22b lim: 35 exec/s: 0 rss: 70Mb L: 21/21 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:04.312 [2024-07-15 20:20:56.534226] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.534278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.534414] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.534438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.534597] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.534621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.534766] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.534788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.534932] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.534957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.312 #14 NEW cov: 12050 ft: 13038 corp: 3/57b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 5 CrossOver-ShuffleBytes-InsertByte-CrossOver-InsertRepeatedBytes- 00:07:04.312 [2024-07-15 20:20:56.593928] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.593964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.594104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.594122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.312 NEW_FUNC[1/1]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:04.312 #15 NEW cov: 12070 ft: 13314 corp: 4/78b lim: 35 exec/s: 0 rss: 70Mb L: 21/35 MS: 1 InsertRepeatedBytes- 00:07:04.312 [2024-07-15 20:20:56.634277] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.634306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.634440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.634463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.634597] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.634616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.634743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.634759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.634885] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.634902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.312 #16 NEW cov: 12155 ft: 13573 corp: 5/113b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:07:04.312 [2024-07-15 20:20:56.684394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.684424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.684569] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.684586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.684724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.684741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.684885] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.684903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.312 [2024-07-15 20:20:56.685034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.312 [2024-07-15 20:20:56.685053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.571 #17 NEW cov: 12155 ft: 13710 corp: 6/148b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CrossOver- 00:07:04.571 [2024-07-15 20:20:56.734241] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001ce SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.734268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.734405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.734423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.571 #18 NEW cov: 12155 ft: 13836 corp: 7/170b lim: 35 exec/s: 0 rss: 71Mb L: 22/35 MS: 1 InsertByte- 00:07:04.571 [2024-07-15 20:20:56.784357] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.784385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.784530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.784550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.784681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.784697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.784828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.784846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.784978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.784996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.571 #19 NEW cov: 12155 ft: 13966 corp: 8/205b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CrossOver- 00:07:04.571 [2024-07-15 20:20:56.834871] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.834899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.835027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.835046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.835181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.835198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.835339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.835357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.835495] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.835514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.571 #20 NEW cov: 12155 ft: 14041 corp: 9/240b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:04.571 [2024-07-15 20:20:56.874893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.874921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.875058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.875075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.875200] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.875218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.875342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.875359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.571 [2024-07-15 20:20:56.875490] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:04.571 [2024-07-15 20:20:56.875508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.572 #26 NEW cov: 12155 ft: 14074 corp: 10/275b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:04.572 [2024-07-15 20:20:56.925020] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.572 [2024-07-15 20:20:56.925048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.572 [2024-07-15 20:20:56.925174] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.572 [2024-07-15 20:20:56.925191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.572 [2024-07-15 20:20:56.925324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.572 [2024-07-15 20:20:56.925340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.572 [2024-07-15 20:20:56.925466] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.572 [2024-07-15 20:20:56.925497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.572 [2024-07-15 20:20:56.925625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.572 [2024-07-15 20:20:56.925641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.572 #27 NEW cov: 12155 ft: 14137 corp: 11/310b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:07:04.831 [2024-07-15 20:20:56.965171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.831 [2024-07-15 20:20:56.965200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.831 [2024-07-15 20:20:56.965341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000138 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.831 [2024-07-15 20:20:56.965362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.831 [2024-07-15 20:20:56.965493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.831 [2024-07-15 20:20:56.965509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.831 #28 NEW cov: 12155 ft: 14154 corp: 12/339b lim: 35 exec/s: 0 rss: 71Mb L: 29/35 MS: 1 InsertRepeatedBytes- 00:07:04.832 [2024-07-15 20:20:57.004966] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.004993] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.005125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.005142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.005273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.005291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.005423] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.005440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.005579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.005596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.832 #29 NEW cov: 12155 ft: 14199 corp: 13/374b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:04.832 [2024-07-15 20:20:57.044627] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.044657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.832 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:04.832 #30 NEW cov: 12178 ft: 14611 corp: 14/384b lim: 35 exec/s: 0 rss: 71Mb L: 10/35 MS: 1 CrossOver- 00:07:04.832 [2024-07-15 20:20:57.085210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.085238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.085384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.085403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.085547] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.085564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.832 #31 NEW cov: 12178 ft: 14645 corp: 15/405b lim: 35 exec/s: 0 rss: 71Mb L: 21/35 MS: 1 ChangeByte- 00:07:04.832 [2024-07-15 20:20:57.135470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 
20:20:57.135499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.135634] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.135653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.135783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.135799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.135926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.135944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.136077] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.136094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.832 #32 NEW cov: 12178 ft: 14683 corp: 16/440b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:04.832 [2024-07-15 20:20:57.175668] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001ce SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.175696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.175830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.175848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.832 [2024-07-15 20:20:57.175985] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000487 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:04.832 [2024-07-15 20:20:57.176003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.832 #33 NEW cov: 12178 ft: 14709 corp: 17/473b lim: 35 exec/s: 33 rss: 71Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:07:05.092 [2024-07-15 20:20:57.235854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.235884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.092 [2024-07-15 20:20:57.236016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000138 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.236034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.092 [2024-07-15 20:20:57.236166] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000138 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.236183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.092 #34 NEW cov: 12178 ft: 14761 corp: 18/503b lim: 35 exec/s: 34 rss: 71Mb L: 30/35 MS: 1 InsertByte- 00:07:05.092 [2024-07-15 20:20:57.285467] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.285494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.092 [2024-07-15 20:20:57.285628] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.285649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.092 #35 NEW cov: 12178 ft: 14861 corp: 19/520b lim: 35 exec/s: 35 rss: 71Mb L: 17/35 MS: 1 EraseBytes- 00:07:05.092 [2024-07-15 20:20:57.326215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.326242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.092 [2024-07-15 20:20:57.326375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.326391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.092 [2024-07-15 20:20:57.326520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.326536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.092 [2024-07-15 20:20:57.326680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.326698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.092 [2024-07-15 20:20:57.326827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.326844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.092 #36 NEW cov: 12178 ft: 14869 corp: 20/555b lim: 35 exec/s: 36 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:07:05.092 [2024-07-15 20:20:57.386409] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.092 [2024-07-15 20:20:57.386438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.092 [2024-07-15 20:20:57.386583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.093 [2024-07-15 20:20:57.386604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.093 [2024-07-15 20:20:57.386735] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.093 [2024-07-15 20:20:57.386752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.093 [2024-07-15 20:20:57.386890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.093 [2024-07-15 20:20:57.386910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.093 [2024-07-15 20:20:57.387043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.093 [2024-07-15 20:20:57.387063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.093 #37 NEW cov: 12178 ft: 14873 corp: 21/590b lim: 35 exec/s: 37 rss: 72Mb L: 35/35 MS: 1 CrossOver- 00:07:05.093 [2024-07-15 20:20:57.436173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.093 [2024-07-15 20:20:57.436202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.093 [2024-07-15 20:20:57.436335] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.093 [2024-07-15 20:20:57.436354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.093 [2024-07-15 20:20:57.436486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.093 [2024-07-15 20:20:57.436503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.093 [2024-07-15 20:20:57.436631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.093 [2024-07-15 20:20:57.436650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.093 [2024-07-15 20:20:57.436780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.093 [2024-07-15 20:20:57.436797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.093 #38 NEW cov: 12178 ft: 14933 corp: 22/625b lim: 35 exec/s: 38 rss: 72Mb L: 35/35 MS: 1 CrossOver- 00:07:05.352 [2024-07-15 20:20:57.476728] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.476755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:05.352 [2024-07-15 20:20:57.476883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.476899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.352 [2024-07-15 20:20:57.477030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.477047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.352 [2024-07-15 20:20:57.477173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.477189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.352 [2024-07-15 20:20:57.477323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.477340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.352 #39 NEW cov: 12178 ft: 14937 corp: 23/660b lim: 35 exec/s: 39 rss: 72Mb L: 35/35 MS: 1 CrossOver- 00:07:05.352 [2024-07-15 20:20:57.526782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.526811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.352 [2024-07-15 20:20:57.526943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.526962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.352 [2024-07-15 20:20:57.527094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.527111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.352 [2024-07-15 20:20:57.527241] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.527262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.352 [2024-07-15 20:20:57.527390] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000730 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.352 [2024-07-15 20:20:57.527408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.352 #40 NEW cov: 12178 ft: 14977 corp: 24/695b lim: 35 exec/s: 40 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:07:05.352 [2024-07-15 20:20:57.576565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000025d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 
20:20:57.576593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.576729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000025d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.576747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.576887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000025d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.576906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.353 #41 NEW cov: 12178 ft: 15000 corp: 25/716b lim: 35 exec/s: 41 rss: 72Mb L: 21/35 MS: 1 InsertRepeatedBytes- 00:07:05.353 [2024-07-15 20:20:57.626970] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.627000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.627134] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.627154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.627293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.627312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.627449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.627466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.353 #42 NEW cov: 12178 ft: 15034 corp: 26/749b lim: 35 exec/s: 42 rss: 72Mb L: 33/35 MS: 1 CrossOver- 00:07:05.353 [2024-07-15 20:20:57.676862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.676890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.677022] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.677041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.677173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.677193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.677325] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.677341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.677478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.677496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.353 #43 NEW cov: 12178 ft: 15036 corp: 27/784b lim: 35 exec/s: 43 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:05.353 [2024-07-15 20:20:57.717100] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.717129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.717257] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000138 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.717275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.353 [2024-07-15 20:20:57.717407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000012b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.353 [2024-07-15 20:20:57.717425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.612 #44 NEW cov: 12178 ft: 15064 corp: 28/818b lim: 35 exec/s: 44 rss: 72Mb L: 34/35 MS: 1 CrossOver- 00:07:05.612 [2024-07-15 20:20:57.767141] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.767172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.767304] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.767323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.767460] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.767480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.612 #45 NEW cov: 12178 ft: 15070 corp: 29/845b lim: 35 exec/s: 45 rss: 72Mb L: 27/35 MS: 1 CrossOver- 00:07:05.612 [2024-07-15 20:20:57.827372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.827402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.827544] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005e9 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.827565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.827703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.827722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.612 #46 NEW cov: 12178 ft: 15071 corp: 30/872b lim: 35 exec/s: 46 rss: 72Mb L: 27/35 MS: 1 ChangeByte- 00:07:05.612 [2024-07-15 20:20:57.887755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.887787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.887915] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.887932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.888053] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.888072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.888213] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.888230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.612 #47 NEW cov: 12178 ft: 15075 corp: 31/905b lim: 35 exec/s: 47 rss: 72Mb L: 33/35 MS: 1 ChangeBinInt- 00:07:05.612 [2024-07-15 20:20:57.948140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.948168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.948313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.948331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.948453] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.612 [2024-07-15 20:20:57.948473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.612 [2024-07-15 20:20:57.948597] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.613 [2024-07-15 20:20:57.948616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:05.613 [2024-07-15 20:20:57.948740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.613 [2024-07-15 20:20:57.948760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.613 #48 NEW cov: 12178 ft: 15120 corp: 32/940b lim: 35 exec/s: 48 rss: 72Mb L: 35/35 MS: 1 CrossOver- 00:07:05.873 [2024-07-15 20:20:58.008404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.008432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.873 [2024-07-15 20:20:58.008579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.008598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.873 [2024-07-15 20:20:58.008721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.008739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.873 [2024-07-15 20:20:58.008866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.008885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.873 [2024-07-15 20:20:58.009025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.009041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.873 #49 NEW cov: 12178 ft: 15131 corp: 33/975b lim: 35 exec/s: 49 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:07:05.873 [2024-07-15 20:20:58.068248] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.068275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.873 [2024-07-15 20:20:58.068404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.068422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.873 [2024-07-15 20:20:58.068562] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.068582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.873 [2024-07-15 20:20:58.068722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 
20:20:58.068739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.873 #50 NEW cov: 12178 ft: 15143 corp: 34/1008b lim: 35 exec/s: 50 rss: 73Mb L: 33/35 MS: 1 ChangeBinInt- 00:07:05.873 [2024-07-15 20:20:58.128293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000005a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.128323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.873 [2024-07-15 20:20:58.128462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.128482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.873 [2024-07-15 20:20:58.128610] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.873 [2024-07-15 20:20:58.128628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.873 #51 NEW cov: 12178 ft: 15173 corp: 35/1035b lim: 35 exec/s: 25 rss: 73Mb L: 27/35 MS: 1 ChangeBinInt- 00:07:05.873 #51 DONE cov: 12178 ft: 15173 corp: 35/1035b lim: 35 exec/s: 25 rss: 73Mb 00:07:05.873 Done 51 runs in 2 second(s) 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 
00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:06.133 20:20:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:06.133 [2024-07-15 20:20:58.332533] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:06.133 [2024-07-15 20:20:58.332618] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325352 ] 00:07:06.133 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.392 [2024-07-15 20:20:58.517804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.392 [2024-07-15 20:20:58.583829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.392 [2024-07-15 20:20:58.643142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.392 [2024-07-15 20:20:58.659417] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:06.392 INFO: Running with entropic power schedule (0xFF, 100). 00:07:06.392 INFO: Seed: 3182267900 00:07:06.392 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:06.392 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:06.392 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:06.392 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.392 #2 INITED exec/s: 0 rss: 65Mb 00:07:06.392 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:06.392 This may also happen if the target rejected all inputs we tried so far 00:07:06.392 [2024-07-15 20:20:58.714625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.392 [2024-07-15 20:20:58.714656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.392 [2024-07-15 20:20:58.714715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.392 [2024-07-15 20:20:58.714731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.652 NEW_FUNC[1/697]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:06.652 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.652 #13 NEW cov: 12040 ft: 12030 corp: 2/53b lim: 105 exec/s: 0 rss: 71Mb L: 52/52 MS: 1 InsertRepeatedBytes- 00:07:06.911 [2024-07-15 20:20:59.045646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.045700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.045789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.045817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.911 NEW_FUNC[1/1]: 0x188ab60 in nvme_tcp_read_data /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h:412 00:07:06.911 #19 NEW cov: 12154 ft: 12545 corp: 3/106b lim: 105 exec/s: 0 rss: 71Mb L: 53/53 MS: 1 CrossOver- 00:07:06.911 [2024-07-15 20:20:59.105936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:168099840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.105962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.106019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.106035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.106087] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.106102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.106152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 
20:20:59.106169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.106224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.106240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:06.911 #21 NEW cov: 12160 ft: 13411 corp: 4/211b lim: 105 exec/s: 0 rss: 71Mb L: 105/105 MS: 2 CMP-InsertRepeatedBytes- DE: "\005\000\000\000\000\000\000\000"- 00:07:06.911 [2024-07-15 20:20:59.145717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.145747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.145801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.145817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.911 #32 NEW cov: 12245 ft: 13716 corp: 5/256b lim: 105 exec/s: 0 rss: 71Mb L: 45/105 MS: 1 InsertRepeatedBytes- 00:07:06.911 [2024-07-15 20:20:59.185923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.185951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.185998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.186013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.186068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.186085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.911 #33 NEW cov: 12245 ft: 14132 corp: 6/327b lim: 105 exec/s: 0 rss: 72Mb L: 71/105 MS: 1 CrossOver- 00:07:06.911 [2024-07-15 20:20:59.235945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.235972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.236009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073575333887 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.236026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.911 #34 NEW cov: 12245 ft: 14262 corp: 7/380b lim: 105 exec/s: 0 rss: 72Mb L: 53/105 MS: 1 InsertByte- 00:07:06.911 [2024-07-15 
20:20:59.276028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.276054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.911 [2024-07-15 20:20:59.276092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.911 [2024-07-15 20:20:59.276107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.171 #35 NEW cov: 12245 ft: 14326 corp: 8/425b lim: 105 exec/s: 0 rss: 72Mb L: 45/105 MS: 1 ShuffleBytes- 00:07:07.171 [2024-07-15 20:20:59.316271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.316298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.171 [2024-07-15 20:20:59.316362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.316378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.171 [2024-07-15 20:20:59.316434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.316456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.171 #36 NEW cov: 12245 ft: 14382 corp: 9/496b lim: 105 exec/s: 0 rss: 72Mb L: 71/105 MS: 1 ChangeBinInt- 00:07:07.171 [2024-07-15 20:20:59.366302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.366329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.171 [2024-07-15 20:20:59.366367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:72057589742960640 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.366383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.171 #37 NEW cov: 12245 ft: 14421 corp: 10/556b lim: 105 exec/s: 0 rss: 72Mb L: 60/105 MS: 1 PersAutoDict- DE: "\005\000\000\000\000\000\000\000"- 00:07:07.171 [2024-07-15 20:20:59.406271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.406300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.171 #38 NEW cov: 12245 ft: 14933 corp: 11/588b lim: 105 exec/s: 0 rss: 72Mb L: 32/105 MS: 1 EraseBytes- 00:07:07.171 [2024-07-15 20:20:59.446681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.446707] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.171 [2024-07-15 20:20:59.446751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.446765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.171 [2024-07-15 20:20:59.446819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15996785876420001792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.446834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.171 #39 NEW cov: 12245 ft: 14947 corp: 12/659b lim: 105 exec/s: 0 rss: 72Mb L: 71/105 MS: 1 ChangeByte- 00:07:07.171 [2024-07-15 20:20:59.486636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.486662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.171 [2024-07-15 20:20:59.486707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:72057589742960640 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.486723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.171 #40 NEW cov: 12245 ft: 14967 corp: 13/719b lim: 105 exec/s: 0 rss: 72Mb L: 60/105 MS: 1 CMP- DE: "\000\014"- 00:07:07.171 [2024-07-15 20:20:59.536878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.536904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.171 [2024-07-15 20:20:59.536945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.536961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.171 [2024-07-15 20:20:59.537013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:25614222880669696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.171 [2024-07-15 20:20:59.537028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.430 #41 NEW cov: 12245 ft: 15001 corp: 14/790b lim: 105 exec/s: 0 rss: 72Mb L: 71/105 MS: 1 ChangeByte- 00:07:07.430 [2024-07-15 20:20:59.576860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.430 [2024-07-15 20:20:59.576886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.430 [2024-07-15 20:20:59.576926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:07.430 [2024-07-15 20:20:59.576940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.430 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:07.430 #42 NEW cov: 12268 ft: 15038 corp: 15/843b lim: 105 exec/s: 0 rss: 72Mb L: 53/105 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\177"- 00:07:07.430 [2024-07-15 20:20:59.627027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.430 [2024-07-15 20:20:59.627057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.430 [2024-07-15 20:20:59.627107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:72057589751349248 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.430 [2024-07-15 20:20:59.627122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.430 #43 NEW cov: 12268 ft: 15076 corp: 16/903b lim: 105 exec/s: 0 rss: 72Mb L: 60/105 MS: 1 ChangeBit- 00:07:07.430 [2024-07-15 20:20:59.667209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.430 [2024-07-15 20:20:59.667236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.430 [2024-07-15 20:20:59.667283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.430 [2024-07-15 20:20:59.667299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.430 [2024-07-15 20:20:59.667354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:656640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.430 [2024-07-15 20:20:59.667367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.430 #44 NEW cov: 12268 ft: 15166 corp: 17/982b lim: 105 exec/s: 44 rss: 72Mb L: 79/105 MS: 1 PersAutoDict- DE: "\005\000\000\000\000\000\000\000"- 00:07:07.430 [2024-07-15 20:20:59.717259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.430 [2024-07-15 20:20:59.717284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.430 [2024-07-15 20:20:59.717324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.430 [2024-07-15 20:20:59.717340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.430 #45 NEW cov: 12268 ft: 15189 corp: 18/1034b lim: 105 exec/s: 45 rss: 72Mb L: 52/105 MS: 1 ShuffleBytes- 00:07:07.430 [2024-07-15 20:20:59.757363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.430 [2024-07-15 20:20:59.757388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.431 [2024-07-15 20:20:59.757449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073575333887 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.431 [2024-07-15 20:20:59.757466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.431 #46 NEW cov: 12268 ft: 15229 corp: 19/1087b lim: 105 exec/s: 46 rss: 72Mb L: 53/105 MS: 1 ChangeBinInt- 00:07:07.431 [2024-07-15 20:20:59.807402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.431 [2024-07-15 20:20:59.807429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 #47 NEW cov: 12268 ft: 15257 corp: 20/1120b lim: 105 exec/s: 47 rss: 72Mb L: 33/105 MS: 1 InsertByte- 00:07:07.689 [2024-07-15 20:20:59.857519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:20:59.857545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 #48 NEW cov: 12268 ft: 15274 corp: 21/1152b lim: 105 exec/s: 48 rss: 72Mb L: 32/105 MS: 1 ChangeBinInt- 00:07:07.689 [2024-07-15 20:20:59.897767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:20:59.897793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 [2024-07-15 20:20:59.897849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744004990074879 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:20:59.897865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.689 #49 NEW cov: 12268 ft: 15284 corp: 22/1204b lim: 105 exec/s: 49 rss: 72Mb L: 52/105 MS: 1 ChangeBit- 00:07:07.689 [2024-07-15 20:20:59.937878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:20:59.937904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 [2024-07-15 20:20:59.937964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1793 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:20:59.937980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.689 #50 NEW cov: 12268 ft: 15291 corp: 23/1256b lim: 105 exec/s: 50 rss: 72Mb L: 52/105 MS: 1 ChangeBinInt- 00:07:07.689 [2024-07-15 20:20:59.977989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:07.689 [2024-07-15 20:20:59.978014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 [2024-07-15 20:20:59.978082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:360287970189639690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:20:59.978097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.689 #51 NEW cov: 12268 ft: 15299 corp: 24/1316b lim: 105 exec/s: 51 rss: 72Mb L: 60/105 MS: 1 EraseBytes- 00:07:07.689 [2024-07-15 20:21:00.018135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:21:00.018161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 [2024-07-15 20:21:00.018200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:21:00.018216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.689 #52 NEW cov: 12268 ft: 15375 corp: 25/1361b lim: 105 exec/s: 52 rss: 72Mb L: 45/105 MS: 1 ChangeBinInt- 00:07:07.689 [2024-07-15 20:21:00.058366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:21:00.058394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.689 [2024-07-15 20:21:00.058439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:21:00.058460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.689 [2024-07-15 20:21:00.058516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6510516211317496410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.689 [2024-07-15 20:21:00.058535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.949 #53 NEW cov: 12268 ft: 15395 corp: 26/1436b lim: 105 exec/s: 53 rss: 72Mb L: 75/105 MS: 1 InsertRepeatedBytes- 00:07:07.949 [2024-07-15 20:21:00.108333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069431361535 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.108363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.949 #55 NEW cov: 12268 ft: 15430 corp: 27/1472b lim: 105 exec/s: 55 rss: 72Mb L: 36/105 MS: 2 InsertByte-CrossOver- 00:07:07.949 [2024-07-15 20:21:00.148838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:168099840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.148866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.949 
[2024-07-15 20:21:00.148921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.148937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.148990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.149005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.149059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.149075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.149128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.149144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.949 #56 NEW cov: 12268 ft: 15444 corp: 28/1577b lim: 105 exec/s: 56 rss: 73Mb L: 105/105 MS: 1 CrossOver- 00:07:07.949 [2024-07-15 20:21:00.198993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:168099840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.199019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.199074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.199089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.199145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.199160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.199214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.199229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.199284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.199298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:07.949 #57 NEW cov: 12268 ft: 15462 corp: 29/1682b lim: 105 exec/s: 57 rss: 73Mb L: 105/105 MS: 1 ChangeBinInt- 00:07:07.949 [2024-07-15 20:21:00.248874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.248900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.248953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.248968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.249025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.249040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.949 #58 NEW cov: 12268 ft: 15482 corp: 30/1754b lim: 105 exec/s: 58 rss: 73Mb L: 72/105 MS: 1 InsertByte- 00:07:07.949 [2024-07-15 20:21:00.288868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.288895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.949 [2024-07-15 20:21:00.288934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069415436287 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.949 [2024-07-15 20:21:00.288948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.949 #59 NEW cov: 12268 ft: 15514 corp: 31/1808b lim: 105 exec/s: 59 rss: 73Mb L: 54/105 MS: 1 PersAutoDict- DE: "\000\014"- 00:07:08.208 [2024-07-15 20:21:00.339210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.339236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-07-15 20:21:00.339282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.339299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-07-15 20:21:00.339350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:360287970190295040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.339365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.208 #60 NEW cov: 12268 ft: 15529 corp: 32/1887b lim: 105 exec/s: 60 rss: 73Mb L: 79/105 MS: 1 ShuffleBytes- 00:07:08.208 [2024-07-15 20:21:00.389316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.389343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-07-15 20:21:00.389408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.389424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-07-15 20:21:00.389484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6510516211317496410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.389500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.208 [2024-07-15 20:21:00.439456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.439483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-07-15 20:21:00.439545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.439562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-07-15 20:21:00.439614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6510615167363973210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.439630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.208 #62 NEW cov: 12268 ft: 15543 corp: 33/1961b lim: 105 exec/s: 62 rss: 73Mb L: 74/105 MS: 2 CrossOver-InsertByte- 00:07:08.208 [2024-07-15 20:21:00.479313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.479340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 #63 NEW cov: 12268 ft: 15607 corp: 34/1993b lim: 105 exec/s: 63 rss: 73Mb L: 32/105 MS: 1 ShuffleBytes- 00:07:08.208 [2024-07-15 20:21:00.519690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.519716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.208 [2024-07-15 20:21:00.519763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65291 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.519778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.208 [2024-07-15 20:21:00.519830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.208 [2024-07-15 20:21:00.519845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.208 #64 NEW cov: 12268 ft: 15614 corp: 35/2066b lim: 105 exec/s: 64 rss: 73Mb L: 73/105 MS: 1 InsertRepeatedBytes- 00:07:08.208 [2024-07-15 
20:21:00.569830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.209 [2024-07-15 20:21:00.569857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.209 [2024-07-15 20:21:00.569907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:72057589742960640 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.209 [2024-07-15 20:21:00.569922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.209 [2024-07-15 20:21:00.569975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7306357456645743973 len:25958 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.209 [2024-07-15 20:21:00.569990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.467 #65 NEW cov: 12268 ft: 15621 corp: 36/2145b lim: 105 exec/s: 65 rss: 73Mb L: 79/105 MS: 1 InsertRepeatedBytes- 00:07:08.467 [2024-07-15 20:21:00.609703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:844424930131968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.467 [2024-07-15 20:21:00.609733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.467 #66 NEW cov: 12268 ft: 15628 corp: 37/2178b lim: 105 exec/s: 66 rss: 73Mb L: 33/105 MS: 1 InsertByte- 00:07:08.467 [2024-07-15 20:21:00.659968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2147483648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.467 [2024-07-15 20:21:00.659994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.467 [2024-07-15 20:21:00.660049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:360287970189639690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.467 [2024-07-15 20:21:00.660066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.467 #67 NEW cov: 12268 ft: 15699 corp: 38/2238b lim: 105 exec/s: 67 rss: 73Mb L: 60/105 MS: 1 ChangeBit- 00:07:08.467 [2024-07-15 20:21:00.710031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:196812581371904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.467 [2024-07-15 20:21:00.710058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.467 #68 NEW cov: 12268 ft: 15700 corp: 39/2270b lim: 105 exec/s: 34 rss: 73Mb L: 32/105 MS: 1 ChangeByte- 00:07:08.467 #68 DONE cov: 12268 ft: 15700 corp: 39/2270b lim: 105 exec/s: 34 rss: 73Mb 00:07:08.467 ###### Recommended dictionary. ###### 00:07:08.467 "\005\000\000\000\000\000\000\000" # Uses: 3 00:07:08.467 "\000\014" # Uses: 1 00:07:08.467 "\377\377\377\377\377\377\377\177" # Uses: 0 00:07:08.467 ###### End of recommended dictionary. 
###### 00:07:08.467 Done 68 runs in 2 second(s) 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.725 20:21:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:08.725 [2024-07-15 20:21:00.916180] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:08.725 [2024-07-15 20:21:00.916249] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325831 ] 00:07:08.725 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.725 [2024-07-15 20:21:01.102512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.982 [2024-07-15 20:21:01.170780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.982 [2024-07-15 20:21:01.230448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.982 [2024-07-15 20:21:01.246739] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:08.982 INFO: Running with entropic power schedule (0xFF, 100). 00:07:08.982 INFO: Seed: 1476289889 00:07:08.982 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:08.982 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:08.982 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:08.982 INFO: A corpus is not provided, starting from an empty corpus 00:07:08.982 #2 INITED exec/s: 0 rss: 64Mb 00:07:08.982 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:08.982 This may also happen if the target rejected all inputs we tried so far 00:07:08.982 [2024-07-15 20:21:01.301999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.982 [2024-07-15 20:21:01.302030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.982 [2024-07-15 20:21:01.302084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.982 [2024-07-15 20:21:01.302100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.548 NEW_FUNC[1/699]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:09.548 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:09.548 #3 NEW cov: 12055 ft: 12053 corp: 2/56b lim: 120 exec/s: 0 rss: 70Mb L: 55/55 MS: 1 InsertRepeatedBytes- 00:07:09.548 [2024-07-15 20:21:01.642820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.642855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.548 [2024-07-15 20:21:01.642911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.642927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.548 #4 NEW cov: 12175 ft: 12613 corp: 3/125b lim: 120 exec/s: 0 rss: 70Mb L: 69/69 MS: 1 InsertRepeatedBytes- 00:07:09.548 [2024-07-15 20:21:01.682893] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.682921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.548 [2024-07-15 20:21:01.682967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.682983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.548 #5 NEW cov: 12181 ft: 12816 corp: 4/180b lim: 120 exec/s: 0 rss: 70Mb L: 55/69 MS: 1 ShuffleBytes- 00:07:09.548 [2024-07-15 20:21:01.733012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.733039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.548 [2024-07-15 20:21:01.733075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.733090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.548 #11 NEW cov: 12266 ft: 13166 corp: 5/235b lim: 120 exec/s: 0 rss: 70Mb L: 55/69 MS: 1 CopyPart- 00:07:09.548 [2024-07-15 20:21:01.773152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.773179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.548 [2024-07-15 20:21:01.773216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.773232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.548 #12 NEW cov: 12266 ft: 13253 corp: 6/290b lim: 120 exec/s: 0 rss: 71Mb L: 55/69 MS: 1 ChangeBinInt- 00:07:09.548 [2024-07-15 20:21:01.823264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.823290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.548 [2024-07-15 20:21:01.823330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.823344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.548 #13 NEW cov: 12266 ft: 13312 corp: 7/359b lim: 120 exec/s: 0 rss: 71Mb L: 69/69 MS: 1 ChangeByte- 00:07:09.548 [2024-07-15 20:21:01.873393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.873420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:09.548 [2024-07-15 20:21:01.873494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069414584320 len:65529 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.873510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.548 #14 NEW cov: 12266 ft: 13447 corp: 8/414b lim: 120 exec/s: 0 rss: 71Mb L: 55/69 MS: 1 ChangeBinInt- 00:07:09.548 [2024-07-15 20:21:01.923500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.923534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.548 [2024-07-15 20:21:01.923585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:65 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.548 [2024-07-15 20:21:01.923599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.807 #15 NEW cov: 12266 ft: 13471 corp: 9/484b lim: 120 exec/s: 0 rss: 71Mb L: 70/70 MS: 1 InsertByte- 00:07:09.808 [2024-07-15 20:21:01.963652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:01.963678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.808 [2024-07-15 20:21:01.963725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:01.963741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.808 #21 NEW cov: 12266 ft: 13488 corp: 10/543b lim: 120 exec/s: 0 rss: 71Mb L: 59/70 MS: 1 CrossOver- 00:07:09.808 [2024-07-15 20:21:02.003593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:100663296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.003620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.808 #24 NEW cov: 12266 ft: 14342 corp: 11/586b lim: 120 exec/s: 0 rss: 71Mb L: 43/70 MS: 3 ChangeBit-ChangeBit-CrossOver- 00:07:09.808 [2024-07-15 20:21:02.043821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.043847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.808 [2024-07-15 20:21:02.043906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744069414584320 len:65529 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.043921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.808 #25 NEW cov: 12266 ft: 14395 corp: 12/641b lim: 120 exec/s: 0 rss: 71Mb L: 55/70 MS: 1 ShuffleBytes- 00:07:09.808 [2024-07-15 20:21:02.093999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.094025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.808 [2024-07-15 20:21:02.094061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.094075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.808 #26 NEW cov: 12266 ft: 14435 corp: 13/710b lim: 120 exec/s: 0 rss: 71Mb L: 69/70 MS: 1 ShuffleBytes- 00:07:09.808 [2024-07-15 20:21:02.144145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.144173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.808 [2024-07-15 20:21:02.144223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.144238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.808 #27 NEW cov: 12266 ft: 14460 corp: 14/780b lim: 120 exec/s: 0 rss: 71Mb L: 70/70 MS: 1 InsertByte- 00:07:09.808 [2024-07-15 20:21:02.184439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.184471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.808 [2024-07-15 20:21:02.184520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.184536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.808 [2024-07-15 20:21:02.184588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4259840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.808 [2024-07-15 20:21:02.184602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.067 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:10.067 #28 NEW cov: 12289 ft: 14858 corp: 15/872b lim: 120 exec/s: 0 rss: 72Mb L: 92/92 MS: 1 CopyPart- 00:07:10.067 [2024-07-15 20:21:02.234532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:100663296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.234560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.067 [2024-07-15 20:21:02.234600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7089336938131513954 len:25187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.234614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:10.067 [2024-07-15 20:21:02.234666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:7089336938131513954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.234681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.067 #29 NEW cov: 12289 ft: 14930 corp: 16/958b lim: 120 exec/s: 0 rss: 72Mb L: 86/92 MS: 1 InsertRepeatedBytes- 00:07:10.067 [2024-07-15 20:21:02.284373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.284400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.067 #30 NEW cov: 12289 ft: 14948 corp: 17/999b lim: 120 exec/s: 30 rss: 72Mb L: 41/92 MS: 1 EraseBytes- 00:07:10.067 [2024-07-15 20:21:02.334959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.334987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.067 [2024-07-15 20:21:02.335030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.335045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.067 [2024-07-15 20:21:02.335115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:66 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.335129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.067 [2024-07-15 20:21:02.335181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.335196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.067 #31 NEW cov: 12289 ft: 15341 corp: 18/1097b lim: 120 exec/s: 31 rss: 72Mb L: 98/98 MS: 1 CrossOver- 00:07:10.067 [2024-07-15 20:21:02.374652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.374679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.067 #32 NEW cov: 12289 ft: 15347 corp: 19/1143b lim: 120 exec/s: 32 rss: 72Mb L: 46/98 MS: 1 EraseBytes- 00:07:10.067 [2024-07-15 20:21:02.414786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:100663296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.067 [2024-07-15 20:21:02.414813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.067 #33 NEW cov: 12289 ft: 15364 corp: 20/1186b lim: 120 exec/s: 33 rss: 72Mb L: 43/98 MS: 1 ChangeByte- 00:07:10.326 [2024-07-15 20:21:02.455162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.455189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.455226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.455241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.455294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.455309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.326 #34 NEW cov: 12289 ft: 15374 corp: 21/1278b lim: 120 exec/s: 34 rss: 72Mb L: 92/98 MS: 1 CopyPart- 00:07:10.326 [2024-07-15 20:21:02.505525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:100663296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.505553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.505600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7117029658895475298 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.505614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.505665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:7089336938131513954 len:25187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.505680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.505732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:7061644217367552610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.505747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.326 #35 NEW cov: 12289 ft: 15407 corp: 22/1383b lim: 120 exec/s: 35 rss: 72Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:07:10.326 [2024-07-15 20:21:02.555340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.555366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.555402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.555417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.326 #36 NEW cov: 12289 ft: 15422 corp: 23/1451b lim: 120 exec/s: 36 rss: 72Mb L: 68/105 MS: 1 CrossOver- 00:07:10.326 [2024-07-15 20:21:02.605468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.605494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.605536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.605550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.326 #37 NEW cov: 12289 ft: 15445 corp: 24/1506b lim: 120 exec/s: 37 rss: 72Mb L: 55/105 MS: 1 CopyPart- 00:07:10.326 [2024-07-15 20:21:02.655767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:38755368960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.655795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.655835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7089336938131513954 len:25187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.655849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.655900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:7089336938131513954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.655914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.326 #38 NEW cov: 12289 ft: 15462 corp: 25/1592b lim: 120 exec/s: 38 rss: 72Mb L: 86/105 MS: 1 ChangeBinInt- 00:07:10.326 [2024-07-15 20:21:02.696036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:100663296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.696062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.696111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7117029658895475298 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.696126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.696178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:7089336938131513954 len:25187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.696193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.326 [2024-07-15 20:21:02.696244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:7061644217367552610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.326 [2024-07-15 20:21:02.696258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.585 #39 NEW cov: 12289 ft: 15503 corp: 26/1708b lim: 120 exec/s: 39 rss: 72Mb L: 116/116 MS: 1 InsertRepeatedBytes- 00:07:10.585 [2024-07-15 
20:21:02.745902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.745929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.585 [2024-07-15 20:21:02.745972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:65 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.745987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.585 #40 NEW cov: 12289 ft: 15513 corp: 27/1778b lim: 120 exec/s: 40 rss: 72Mb L: 70/116 MS: 1 ShuffleBytes- 00:07:10.585 [2024-07-15 20:21:02.785995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.786021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.585 [2024-07-15 20:21:02.786068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.786083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.585 #46 NEW cov: 12289 ft: 15547 corp: 28/1833b lim: 120 exec/s: 46 rss: 73Mb L: 55/116 MS: 1 ChangeByte- 00:07:10.585 [2024-07-15 20:21:02.836437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:100663296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.836469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.585 [2024-07-15 20:21:02.836521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.836536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.585 [2024-07-15 20:21:02.836584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.836599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.585 [2024-07-15 20:21:02.836650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.836664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.585 #47 NEW cov: 12289 ft: 15566 corp: 29/1952b lim: 120 exec/s: 47 rss: 73Mb L: 119/119 MS: 1 CrossOver- 00:07:10.585 [2024-07-15 20:21:02.876063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.876090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.585 #48 NEW cov: 12289 ft: 15604 corp: 30/1993b lim: 
120 exec/s: 48 rss: 73Mb L: 41/119 MS: 1 ChangeByte- 00:07:10.585 [2024-07-15 20:21:02.926222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.926249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.585 #49 NEW cov: 12289 ft: 15610 corp: 31/2034b lim: 120 exec/s: 49 rss: 73Mb L: 41/119 MS: 1 CopyPart- 00:07:10.585 [2024-07-15 20:21:02.966396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.585 [2024-07-15 20:21:02.966423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.844 #50 NEW cov: 12289 ft: 15616 corp: 32/2076b lim: 120 exec/s: 50 rss: 73Mb L: 42/119 MS: 1 EraseBytes- 00:07:10.844 [2024-07-15 20:21:03.006459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.844 [2024-07-15 20:21:03.006484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.844 #51 NEW cov: 12289 ft: 15618 corp: 33/2112b lim: 120 exec/s: 51 rss: 73Mb L: 36/119 MS: 1 EraseBytes- 00:07:10.844 [2024-07-15 20:21:03.046599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.844 [2024-07-15 20:21:03.046626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.844 #52 NEW cov: 12289 ft: 15635 corp: 34/2158b lim: 120 exec/s: 52 rss: 73Mb L: 46/119 MS: 1 ChangeBinInt- 00:07:10.844 [2024-07-15 20:21:03.096693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.844 [2024-07-15 20:21:03.096719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.844 #53 NEW cov: 12289 ft: 15672 corp: 35/2192b lim: 120 exec/s: 53 rss: 73Mb L: 34/119 MS: 1 EraseBytes- 00:07:10.844 [2024-07-15 20:21:03.136940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.844 [2024-07-15 20:21:03.136969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.844 [2024-07-15 20:21:03.137027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.844 [2024-07-15 20:21:03.137043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.844 #54 NEW cov: 12289 ft: 15681 corp: 36/2262b lim: 120 exec/s: 54 rss: 73Mb L: 70/119 MS: 1 ShuffleBytes- 00:07:10.844 [2024-07-15 20:21:03.187371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:100663296 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.844 [2024-07-15 20:21:03.187400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:07:10.844 [2024-07-15 20:21:03.187468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7117029658895475298 len:50373 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.844 [2024-07-15 20:21:03.187485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.844 [2024-07-15 20:21:03.187537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:7089336938131513954 len:25187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.844 [2024-07-15 20:21:03.187553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.844 [2024-07-15 20:21:03.187620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:7061644217367552610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.844 [2024-07-15 20:21:03.187638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.844 #55 NEW cov: 12289 ft: 15743 corp: 37/2367b lim: 120 exec/s: 55 rss: 73Mb L: 105/119 MS: 1 ChangeBinInt- 00:07:11.103 [2024-07-15 20:21:03.227386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.103 [2024-07-15 20:21:03.227414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.103 [2024-07-15 20:21:03.227457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.103 [2024-07-15 20:21:03.227472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.104 [2024-07-15 20:21:03.227527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.104 [2024-07-15 20:21:03.227542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.104 #56 NEW cov: 12289 ft: 15748 corp: 38/2459b lim: 120 exec/s: 56 rss: 73Mb L: 92/119 MS: 1 ChangeByte- 00:07:11.104 [2024-07-15 20:21:03.277495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.104 [2024-07-15 20:21:03.277523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.104 [2024-07-15 20:21:03.277564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.104 [2024-07-15 20:21:03.277579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.104 [2024-07-15 20:21:03.277633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4259840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.104 [2024-07-15 20:21:03.277649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.104 #57 NEW cov: 12289 ft: 15759 corp: 39/2551b lim: 120 exec/s: 28 rss: 
73Mb L: 92/119 MS: 1 ChangeBinInt- 00:07:11.104 #57 DONE cov: 12289 ft: 15759 corp: 39/2551b lim: 120 exec/s: 28 rss: 73Mb 00:07:11.104 Done 57 runs in 2 second(s) 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:11.104 20:21:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:11.104 [2024-07-15 20:21:03.462412] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:11.104 [2024-07-15 20:21:03.462515] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326297 ] 00:07:11.363 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.363 [2024-07-15 20:21:03.638303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.363 [2024-07-15 20:21:03.703844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.621 [2024-07-15 20:21:03.763732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.621 [2024-07-15 20:21:03.780035] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:11.621 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.621 INFO: Seed: 4008310207 00:07:11.621 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:11.621 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:11.621 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:11.621 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.621 #2 INITED exec/s: 0 rss: 65Mb 00:07:11.621 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.621 This may also happen if the target rejected all inputs we tried so far 00:07:11.621 [2024-07-15 20:21:03.856169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:11.621 [2024-07-15 20:21:03.856212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.621 [2024-07-15 20:21:03.856330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:11.621 [2024-07-15 20:21:03.856356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.621 [2024-07-15 20:21:03.856477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:11.621 [2024-07-15 20:21:03.856498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.880 NEW_FUNC[1/697]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:11.880 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.880 #3 NEW cov: 12004 ft: 12005 corp: 2/62b lim: 100 exec/s: 0 rss: 71Mb L: 61/61 MS: 1 InsertRepeatedBytes- 00:07:11.880 [2024-07-15 20:21:04.207859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:11.880 [2024-07-15 20:21:04.207924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.880 [2024-07-15 20:21:04.208098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:11.880 [2024-07-15 20:21:04.208135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:11.880 [2024-07-15 20:21:04.208293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:11.880 [2024-07-15 20:21:04.208324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.880 [2024-07-15 20:21:04.208499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:11.880 [2024-07-15 20:21:04.208539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.880 #4 NEW cov: 12118 ft: 13005 corp: 3/142b lim: 100 exec/s: 0 rss: 71Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:07:12.140 [2024-07-15 20:21:04.277760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.140 [2024-07-15 20:21:04.277797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.277911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.140 [2024-07-15 20:21:04.277938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.278063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.140 [2024-07-15 20:21:04.278090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.278218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.140 [2024-07-15 20:21:04.278245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.140 #5 NEW cov: 12124 ft: 13238 corp: 4/227b lim: 100 exec/s: 0 rss: 71Mb L: 85/85 MS: 1 CrossOver- 00:07:12.140 [2024-07-15 20:21:04.337963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.140 [2024-07-15 20:21:04.337998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.338110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.140 [2024-07-15 20:21:04.338133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.338262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.140 [2024-07-15 20:21:04.338289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.338424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.140 [2024-07-15 20:21:04.338447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.140 #6 NEW cov: 12209 ft: 13469 corp: 5/311b lim: 100 exec/s: 0 rss: 72Mb L: 84/85 MS: 1 CrossOver- 00:07:12.140 [2024-07-15 20:21:04.408142] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.140 [2024-07-15 20:21:04.408174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.408268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.140 [2024-07-15 20:21:04.408290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.408424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.140 [2024-07-15 20:21:04.408454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.408610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.140 [2024-07-15 20:21:04.408633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.140 #7 NEW cov: 12209 ft: 13575 corp: 6/391b lim: 100 exec/s: 0 rss: 72Mb L: 80/85 MS: 1 ShuffleBytes- 00:07:12.140 [2024-07-15 20:21:04.458112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.140 [2024-07-15 20:21:04.458142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.458262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.140 [2024-07-15 20:21:04.458285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.458413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.140 [2024-07-15 20:21:04.458445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.140 #8 NEW cov: 12209 ft: 13663 corp: 7/452b lim: 100 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 ChangeBinInt- 00:07:12.140 [2024-07-15 20:21:04.508428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.140 [2024-07-15 20:21:04.508466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.508554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.140 [2024-07-15 20:21:04.508577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.508700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.140 [2024-07-15 20:21:04.508725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.140 [2024-07-15 20:21:04.508860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.140 [2024-07-15 20:21:04.508887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.436 #9 NEW cov: 12209 ft: 13767 corp: 8/537b lim: 100 exec/s: 0 rss: 72Mb L: 85/85 MS: 1 ShuffleBytes- 00:07:12.436 [2024-07-15 20:21:04.558382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.436 [2024-07-15 20:21:04.558419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.558547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.436 [2024-07-15 20:21:04.558575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.558710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.436 [2024-07-15 20:21:04.558731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.436 #10 NEW cov: 12209 ft: 13813 corp: 9/600b lim: 100 exec/s: 0 rss: 72Mb L: 63/85 MS: 1 CMP- DE: "\001\000"- 00:07:12.436 [2024-07-15 20:21:04.608690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.436 [2024-07-15 20:21:04.608722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.608832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.436 [2024-07-15 20:21:04.608856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.608997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.436 [2024-07-15 20:21:04.609022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.609159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.436 [2024-07-15 20:21:04.609183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.436 #11 NEW cov: 12209 ft: 13853 corp: 10/685b lim: 100 exec/s: 0 rss: 72Mb L: 85/85 MS: 1 ShuffleBytes- 00:07:12.436 [2024-07-15 20:21:04.668957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.436 [2024-07-15 20:21:04.668988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.669060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.436 [2024-07-15 20:21:04.669086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.669224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.436 [2024-07-15 20:21:04.669252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:12.436 [2024-07-15 20:21:04.669380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.436 [2024-07-15 20:21:04.669401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.436 #12 NEW cov: 12209 ft: 13907 corp: 11/765b lim: 100 exec/s: 0 rss: 72Mb L: 80/85 MS: 1 CrossOver- 00:07:12.436 [2024-07-15 20:21:04.729166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.436 [2024-07-15 20:21:04.729200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.729329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.436 [2024-07-15 20:21:04.729358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.729497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.436 [2024-07-15 20:21:04.729524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.436 [2024-07-15 20:21:04.729655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.436 [2024-07-15 20:21:04.729680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.436 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:12.436 #13 NEW cov: 12232 ft: 14063 corp: 12/849b lim: 100 exec/s: 0 rss: 72Mb L: 84/85 MS: 1 ChangeBit- 00:07:12.437 [2024-07-15 20:21:04.789076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.437 [2024-07-15 20:21:04.789111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.437 [2024-07-15 20:21:04.789222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.437 [2024-07-15 20:21:04.789242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.437 [2024-07-15 20:21:04.789364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.437 [2024-07-15 20:21:04.789388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.763 #14 NEW cov: 12232 ft: 14070 corp: 13/910b lim: 100 exec/s: 0 rss: 72Mb L: 61/85 MS: 1 ShuffleBytes- 00:07:12.763 [2024-07-15 20:21:04.839582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.763 [2024-07-15 20:21:04.839614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:04.839691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.763 [2024-07-15 20:21:04.839718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:04.839856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.763 [2024-07-15 20:21:04.839883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:04.840018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.763 [2024-07-15 20:21:04.840043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.763 #15 NEW cov: 12232 ft: 14106 corp: 14/995b lim: 100 exec/s: 15 rss: 72Mb L: 85/85 MS: 1 CopyPart- 00:07:12.763 [2024-07-15 20:21:04.889409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.763 [2024-07-15 20:21:04.889446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:04.889536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.763 [2024-07-15 20:21:04.889557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:04.889690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.763 [2024-07-15 20:21:04.889714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.763 #16 NEW cov: 12232 ft: 14140 corp: 15/1058b lim: 100 exec/s: 16 rss: 73Mb L: 63/85 MS: 1 ChangeByte- 00:07:12.763 [2024-07-15 20:21:04.949626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.763 [2024-07-15 20:21:04.949661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:04.949779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.763 [2024-07-15 20:21:04.949806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:04.949946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.763 [2024-07-15 20:21:04.949968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.763 #17 NEW cov: 12232 ft: 14172 corp: 16/1121b lim: 100 exec/s: 17 rss: 73Mb L: 63/85 MS: 1 ChangeByte- 00:07:12.763 [2024-07-15 20:21:04.999856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.763 [2024-07-15 20:21:04.999889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:05.000008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.763 [2024-07-15 20:21:05.000033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.763 
[2024-07-15 20:21:05.000161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.763 [2024-07-15 20:21:05.000187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.763 #18 NEW cov: 12232 ft: 14189 corp: 17/1184b lim: 100 exec/s: 18 rss: 73Mb L: 63/85 MS: 1 CopyPart- 00:07:12.763 [2024-07-15 20:21:05.060207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.763 [2024-07-15 20:21:05.060240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:05.060326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.763 [2024-07-15 20:21:05.060352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:05.060482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.763 [2024-07-15 20:21:05.060512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:05.060642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:12.763 [2024-07-15 20:21:05.060668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.763 #19 NEW cov: 12232 ft: 14204 corp: 18/1269b lim: 100 exec/s: 19 rss: 73Mb L: 85/85 MS: 1 ChangeBinInt- 00:07:12.763 [2024-07-15 20:21:05.110537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.763 [2024-07-15 20:21:05.110567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:05.110693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.763 [2024-07-15 20:21:05.110716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.763 [2024-07-15 20:21:05.110849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.763 [2024-07-15 20:21:05.110877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.023 #20 NEW cov: 12241 ft: 14301 corp: 19/1346b lim: 100 exec/s: 20 rss: 73Mb L: 77/85 MS: 1 CrossOver- 00:07:13.023 [2024-07-15 20:21:05.170537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.023 [2024-07-15 20:21:05.170573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.170683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.023 [2024-07-15 20:21:05.170711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.170853] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.023 [2024-07-15 20:21:05.170882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.171012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.023 [2024-07-15 20:21:05.171039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.023 #21 NEW cov: 12241 ft: 14312 corp: 20/1426b lim: 100 exec/s: 21 rss: 73Mb L: 80/85 MS: 1 CMP- DE: "\000\000\000\000\000\000\000?"- 00:07:13.023 [2024-07-15 20:21:05.220434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.023 [2024-07-15 20:21:05.220470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.220562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.023 [2024-07-15 20:21:05.220589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.220723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.023 [2024-07-15 20:21:05.220745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.023 #22 NEW cov: 12241 ft: 14346 corp: 21/1489b lim: 100 exec/s: 22 rss: 73Mb L: 63/85 MS: 1 ShuffleBytes- 00:07:13.023 [2024-07-15 20:21:05.270701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.023 [2024-07-15 20:21:05.270732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.270822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.023 [2024-07-15 20:21:05.270845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.270983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.023 [2024-07-15 20:21:05.271008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.023 #23 NEW cov: 12241 ft: 14385 corp: 22/1561b lim: 100 exec/s: 23 rss: 73Mb L: 72/85 MS: 1 CrossOver- 00:07:13.023 [2024-07-15 20:21:05.321113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.023 [2024-07-15 20:21:05.321147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.321233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.023 [2024-07-15 20:21:05.321253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.321384] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.023 [2024-07-15 20:21:05.321413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.321545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.023 [2024-07-15 20:21:05.321572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.023 #24 NEW cov: 12241 ft: 14410 corp: 23/1651b lim: 100 exec/s: 24 rss: 73Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:13.023 [2024-07-15 20:21:05.381229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.023 [2024-07-15 20:21:05.381267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.381385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.023 [2024-07-15 20:21:05.381413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.381550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.023 [2024-07-15 20:21:05.381590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.023 [2024-07-15 20:21:05.381724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.023 [2024-07-15 20:21:05.381749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.282 #25 NEW cov: 12241 ft: 14418 corp: 24/1737b lim: 100 exec/s: 25 rss: 73Mb L: 86/90 MS: 1 InsertByte- 00:07:13.282 [2024-07-15 20:21:05.441225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.282 [2024-07-15 20:21:05.441260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.441363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.282 [2024-07-15 20:21:05.441383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.441515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.282 [2024-07-15 20:21:05.441551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.282 #26 NEW cov: 12241 ft: 14428 corp: 25/1800b lim: 100 exec/s: 26 rss: 73Mb L: 63/90 MS: 1 ChangeBit- 00:07:13.282 [2024-07-15 20:21:05.491637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.282 [2024-07-15 20:21:05.491669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.491770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.282 
[2024-07-15 20:21:05.491791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.491922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.282 [2024-07-15 20:21:05.491948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.492087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.282 [2024-07-15 20:21:05.492108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.282 #27 NEW cov: 12241 ft: 14431 corp: 26/1881b lim: 100 exec/s: 27 rss: 73Mb L: 81/90 MS: 1 InsertByte- 00:07:13.282 [2024-07-15 20:21:05.541580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.282 [2024-07-15 20:21:05.541616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.541725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.282 [2024-07-15 20:21:05.541749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.541879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.282 [2024-07-15 20:21:05.541901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.282 #28 NEW cov: 12241 ft: 14447 corp: 27/1958b lim: 100 exec/s: 28 rss: 73Mb L: 77/90 MS: 1 ShuffleBytes- 00:07:13.282 [2024-07-15 20:21:05.601796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.282 [2024-07-15 20:21:05.601830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.601939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.282 [2024-07-15 20:21:05.601961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.602106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.282 [2024-07-15 20:21:05.602134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.282 #29 NEW cov: 12241 ft: 14456 corp: 28/2027b lim: 100 exec/s: 29 rss: 73Mb L: 69/90 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000?"- 00:07:13.282 [2024-07-15 20:21:05.651897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.282 [2024-07-15 20:21:05.651934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.652048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.282 [2024-07-15 20:21:05.652074] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.282 [2024-07-15 20:21:05.652205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.282 [2024-07-15 20:21:05.652231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.541 #30 NEW cov: 12241 ft: 14467 corp: 29/2090b lim: 100 exec/s: 30 rss: 73Mb L: 63/90 MS: 1 ChangeBinInt- 00:07:13.541 [2024-07-15 20:21:05.702311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.541 [2024-07-15 20:21:05.702346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.541 [2024-07-15 20:21:05.702460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.541 [2024-07-15 20:21:05.702479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.541 [2024-07-15 20:21:05.702601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.541 [2024-07-15 20:21:05.702623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.541 [2024-07-15 20:21:05.702756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:13.541 [2024-07-15 20:21:05.702778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.541 #31 NEW cov: 12241 ft: 14476 corp: 30/2183b lim: 100 exec/s: 31 rss: 73Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:07:13.541 [2024-07-15 20:21:05.752305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.541 [2024-07-15 20:21:05.752338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.541 [2024-07-15 20:21:05.752460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.541 [2024-07-15 20:21:05.752486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.541 [2024-07-15 20:21:05.752617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.541 [2024-07-15 20:21:05.752643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.541 #32 NEW cov: 12241 ft: 14493 corp: 31/2260b lim: 100 exec/s: 32 rss: 73Mb L: 77/93 MS: 1 ChangeByte- 00:07:13.541 [2024-07-15 20:21:05.812517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.541 [2024-07-15 20:21:05.812550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.541 [2024-07-15 20:21:05.812667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.541 [2024-07-15 20:21:05.812686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.541 [2024-07-15 20:21:05.812830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.541 [2024-07-15 20:21:05.812854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.541 #33 NEW cov: 12241 ft: 14527 corp: 32/2323b lim: 100 exec/s: 16 rss: 73Mb L: 63/93 MS: 1 ChangeBinInt- 00:07:13.541 #33 DONE cov: 12241 ft: 14527 corp: 32/2323b lim: 100 exec/s: 16 rss: 73Mb 00:07:13.541 ###### Recommended dictionary. ###### 00:07:13.541 "\001\000" # Uses: 0 00:07:13.541 "\000\000\000\000\000\000\000?" # Uses: 1 00:07:13.541 ###### End of recommended dictionary. ###### 00:07:13.541 Done 33 runs in 2 second(s) 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.800 20:21:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:13.800 [2024-07-15 20:21:06.001210] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:13.800 [2024-07-15 20:21:06.001294] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327264 ] 00:07:13.800 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.800 [2024-07-15 20:21:06.179109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.058 [2024-07-15 20:21:06.245374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.058 [2024-07-15 20:21:06.304907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.058 [2024-07-15 20:21:06.321216] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:14.058 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.058 INFO: Seed: 2253319724 00:07:14.058 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:14.058 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:14.058 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:14.058 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.058 #2 INITED exec/s: 0 rss: 64Mb 00:07:14.058 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.058 This may also happen if the target rejected all inputs we tried so far 00:07:14.058 [2024-07-15 20:21:06.369841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070136004607 len:65536 00:07:14.058 [2024-07-15 20:21:06.369871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.058 [2024-07-15 20:21:06.369922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:14.058 [2024-07-15 20:21:06.369938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.317 NEW_FUNC[1/697]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:14.317 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.317 #24 NEW cov: 11982 ft: 11981 corp: 2/28b lim: 50 exec/s: 0 rss: 70Mb L: 27/27 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:14.576 [2024-07-15 20:21:06.710683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11212726789901884315 len:39836 00:07:14.576 [2024-07-15 20:21:06.710716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.576 [2024-07-15 20:21:06.710772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11212726789901884315 len:39836 00:07:14.576 [2024-07-15 20:21:06.710789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.576 #41 NEW cov: 12096 ft: 12649 corp: 3/56b lim: 50 exec/s: 0 rss: 70Mb L: 28/28 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:14.576 [2024-07-15 
20:21:06.750724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:14.576 [2024-07-15 20:21:06.750751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.576 [2024-07-15 20:21:06.750808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743644212822015 len:39836 00:07:14.576 [2024-07-15 20:21:06.750822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.576 #42 NEW cov: 12102 ft: 12810 corp: 4/83b lim: 50 exec/s: 0 rss: 70Mb L: 27/28 MS: 1 CrossOver- 00:07:14.576 [2024-07-15 20:21:06.800874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070135952127 len:65536 00:07:14.576 [2024-07-15 20:21:06.800903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.576 [2024-07-15 20:21:06.800971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:14.576 [2024-07-15 20:21:06.800986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.576 #43 NEW cov: 12187 ft: 13005 corp: 5/110b lim: 50 exec/s: 0 rss: 70Mb L: 27/28 MS: 1 ChangeByte- 00:07:14.576 [2024-07-15 20:21:06.840966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446634118973227007 len:65536 00:07:14.576 [2024-07-15 20:21:06.840994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.576 [2024-07-15 20:21:06.841032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743644212822015 len:39836 00:07:14.576 [2024-07-15 20:21:06.841047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.576 #49 NEW cov: 12187 ft: 13094 corp: 6/137b lim: 50 exec/s: 0 rss: 71Mb L: 27/28 MS: 1 ShuffleBytes- 00:07:14.576 [2024-07-15 20:21:06.891094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:14.576 [2024-07-15 20:21:06.891122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.576 [2024-07-15 20:21:06.891172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743644212822015 len:39836 00:07:14.576 [2024-07-15 20:21:06.891187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.576 #50 NEW cov: 12187 ft: 13144 corp: 7/164b lim: 50 exec/s: 0 rss: 71Mb L: 27/28 MS: 1 ShuffleBytes- 00:07:14.576 [2024-07-15 20:21:06.931211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:14.576 [2024-07-15 20:21:06.931238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:14.576 [2024-07-15 20:21:06.931275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446633693050044415 len:39836 00:07:14.576 [2024-07-15 20:21:06.931290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.834 #51 NEW cov: 12187 ft: 13227 corp: 8/191b lim: 50 exec/s: 0 rss: 71Mb L: 27/28 MS: 1 CopyPart- 00:07:14.834 [2024-07-15 20:21:06.981372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:14.834 [2024-07-15 20:21:06.981399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.834 [2024-07-15 20:21:06.981456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18418486195378978815 len:39836 00:07:14.834 [2024-07-15 20:21:06.981473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.834 #52 NEW cov: 12187 ft: 13250 corp: 9/217b lim: 50 exec/s: 0 rss: 71Mb L: 26/28 MS: 1 EraseBytes- 00:07:14.834 [2024-07-15 20:21:07.031541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446634118973227007 len:65536 00:07:14.834 [2024-07-15 20:21:07.031568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.834 [2024-07-15 20:21:07.031605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743644196044799 len:39836 00:07:14.834 [2024-07-15 20:21:07.031621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.834 #53 NEW cov: 12187 ft: 13287 corp: 10/244b lim: 50 exec/s: 0 rss: 71Mb L: 27/28 MS: 1 ChangeBit- 00:07:14.834 [2024-07-15 20:21:07.081778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070136004607 len:65536 00:07:14.834 [2024-07-15 20:21:07.081806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.834 [2024-07-15 20:21:07.081841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18418596576038486015 len:65536 00:07:14.834 [2024-07-15 20:21:07.081856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.834 [2024-07-15 20:21:07.081911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:39836 00:07:14.834 [2024-07-15 20:21:07.081926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.834 #54 NEW cov: 12187 ft: 13599 corp: 11/282b lim: 50 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:14.834 [2024-07-15 20:21:07.121719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:14.834 [2024-07-15 20:21:07.121745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.834 [2024-07-15 20:21:07.121795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446633693050044415 len:39836 00:07:14.834 [2024-07-15 20:21:07.121811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.834 #55 NEW cov: 12187 ft: 13624 corp: 12/309b lim: 50 exec/s: 0 rss: 71Mb L: 27/38 MS: 1 ChangeBinInt- 00:07:14.834 [2024-07-15 20:21:07.161839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:14.834 [2024-07-15 20:21:07.161866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.834 [2024-07-15 20:21:07.161915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446633693050044415 len:38300 00:07:14.834 [2024-07-15 20:21:07.161931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.834 #56 NEW cov: 12187 ft: 13660 corp: 13/336b lim: 50 exec/s: 0 rss: 71Mb L: 27/38 MS: 1 ChangeBinInt- 00:07:14.834 [2024-07-15 20:21:07.202001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095938028059 len:65536 00:07:14.834 [2024-07-15 20:21:07.202028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.834 [2024-07-15 20:21:07.202086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:14.834 [2024-07-15 20:21:07.202100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.093 #57 NEW cov: 12187 ft: 13679 corp: 14/363b lim: 50 exec/s: 0 rss: 71Mb L: 27/38 MS: 1 ChangeBinInt- 00:07:15.094 [2024-07-15 20:21:07.252217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070136004607 len:65536 00:07:15.094 [2024-07-15 20:21:07.252244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.094 [2024-07-15 20:21:07.252280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18418596576038486015 len:65536 00:07:15.094 [2024-07-15 20:21:07.252295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.094 [2024-07-15 20:21:07.252348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:39836 00:07:15.094 [2024-07-15 20:21:07.252365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.094 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:15.094 #58 NEW cov: 12210 ft: 13737 corp: 15/401b lim: 50 exec/s: 0 rss: 71Mb L: 38/38 MS: 1 ChangeBinInt- 00:07:15.094 [2024-07-15 20:21:07.302277] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:15.094 [2024-07-15 20:21:07.302304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.094 [2024-07-15 20:21:07.302370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743644212822015 len:39836 00:07:15.094 [2024-07-15 20:21:07.302386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.094 #59 NEW cov: 12210 ft: 13753 corp: 16/428b lim: 50 exec/s: 0 rss: 71Mb L: 27/38 MS: 1 EraseBytes- 00:07:15.094 [2024-07-15 20:21:07.342448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070136004607 len:65536 00:07:15.094 [2024-07-15 20:21:07.342475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.094 [2024-07-15 20:21:07.342522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073702998015 len:65536 00:07:15.094 [2024-07-15 20:21:07.342538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.094 [2024-07-15 20:21:07.342592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:39836 00:07:15.094 [2024-07-15 20:21:07.342607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.094 #60 NEW cov: 12210 ft: 13798 corp: 17/466b lim: 50 exec/s: 60 rss: 71Mb L: 38/38 MS: 1 ShuffleBytes- 00:07:15.094 [2024-07-15 20:21:07.382457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343145471 len:65536 00:07:15.094 [2024-07-15 20:21:07.382484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.094 [2024-07-15 20:21:07.382534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743644212822015 len:39836 00:07:15.094 [2024-07-15 20:21:07.382549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.094 #61 NEW cov: 12210 ft: 13817 corp: 18/493b lim: 50 exec/s: 61 rss: 71Mb L: 27/38 MS: 1 ChangeBit- 00:07:15.094 [2024-07-15 20:21:07.422568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070135952127 len:65536 00:07:15.094 [2024-07-15 20:21:07.422598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.094 [2024-07-15 20:21:07.422636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18399174802645450751 len:65536 00:07:15.094 [2024-07-15 20:21:07.422651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.094 #62 NEW cov: 12210 ft: 13827 corp: 19/521b lim: 50 exec/s: 62 rss: 71Mb L: 28/38 MS: 1 InsertByte- 00:07:15.094 [2024-07-15 
20:21:07.462725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:15.094 [2024-07-15 20:21:07.462752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.094 [2024-07-15 20:21:07.462787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743403694653439 len:25701 00:07:15.094 [2024-07-15 20:21:07.462803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.353 #63 NEW cov: 12210 ft: 13931 corp: 20/548b lim: 50 exec/s: 63 rss: 71Mb L: 27/38 MS: 1 ChangeBinInt- 00:07:15.353 [2024-07-15 20:21:07.502660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:15.353 [2024-07-15 20:21:07.502686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.353 #64 NEW cov: 12210 ft: 14327 corp: 21/563b lim: 50 exec/s: 64 rss: 72Mb L: 15/38 MS: 1 EraseBytes- 00:07:15.353 [2024-07-15 20:21:07.552996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095938028059 len:65536 00:07:15.353 [2024-07-15 20:21:07.553023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.353 [2024-07-15 20:21:07.553090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:15.353 [2024-07-15 20:21:07.553105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.353 #65 NEW cov: 12210 ft: 14344 corp: 22/590b lim: 50 exec/s: 65 rss: 72Mb L: 27/38 MS: 1 ChangeBit- 00:07:15.353 [2024-07-15 20:21:07.603118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984116587397119 len:65536 00:07:15.353 [2024-07-15 20:21:07.603145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.353 [2024-07-15 20:21:07.603181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446633693050044415 len:39836 00:07:15.353 [2024-07-15 20:21:07.603195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.353 #66 NEW cov: 12210 ft: 14349 corp: 23/617b lim: 50 exec/s: 66 rss: 72Mb L: 27/38 MS: 1 ChangeBit- 00:07:15.353 [2024-07-15 20:21:07.643258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1945555039729418034 len:65536 00:07:15.353 [2024-07-15 20:21:07.643284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.353 [2024-07-15 20:21:07.643335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:15.353 [2024-07-15 20:21:07.643349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:15.353 #67 NEW cov: 12210 ft: 14368 corp: 24/645b lim: 50 exec/s: 67 rss: 72Mb L: 28/38 MS: 1 CrossOver- 00:07:15.353 [2024-07-15 20:21:07.693405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446634118973227007 len:65536 00:07:15.353 [2024-07-15 20:21:07.693435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.353 [2024-07-15 20:21:07.693491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743644196044799 len:39936 00:07:15.353 [2024-07-15 20:21:07.693507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.353 #68 NEW cov: 12210 ft: 14409 corp: 25/672b lim: 50 exec/s: 68 rss: 72Mb L: 27/38 MS: 1 ShuffleBytes- 00:07:15.611 [2024-07-15 20:21:07.743521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240421716389789695 len:65536 00:07:15.611 [2024-07-15 20:21:07.743549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.611 [2024-07-15 20:21:07.743602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743644212822015 len:39836 00:07:15.611 [2024-07-15 20:21:07.743617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.611 #69 NEW cov: 12210 ft: 14420 corp: 26/699b lim: 50 exec/s: 69 rss: 72Mb L: 27/38 MS: 1 ChangeBit- 00:07:15.611 [2024-07-15 20:21:07.783732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984116587397119 len:65536 00:07:15.611 [2024-07-15 20:21:07.783758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.611 [2024-07-15 20:21:07.783801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446633693050044415 len:3841 00:07:15.611 [2024-07-15 20:21:07.783816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.611 [2024-07-15 20:21:07.783869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11212837167950764955 len:65436 00:07:15.611 [2024-07-15 20:21:07.783885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.611 #70 NEW cov: 12210 ft: 14434 corp: 27/730b lim: 50 exec/s: 70 rss: 72Mb L: 31/38 MS: 1 CMP- DE: "\017\000\000\000"- 00:07:15.611 [2024-07-15 20:21:07.833857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446463681786019839 len:65536 00:07:15.611 [2024-07-15 20:21:07.833883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.612 [2024-07-15 20:21:07.833923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073702998015 len:65536 00:07:15.612 [2024-07-15 20:21:07.833938] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.612 [2024-07-15 20:21:07.833991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:39836 00:07:15.612 [2024-07-15 20:21:07.834022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.612 #71 NEW cov: 12210 ft: 14447 corp: 28/768b lim: 50 exec/s: 71 rss: 72Mb L: 38/38 MS: 1 ChangeBinInt- 00:07:15.612 [2024-07-15 20:21:07.884139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:15.612 [2024-07-15 20:21:07.884165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.612 [2024-07-15 20:21:07.884217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15553137160186484695 len:55256 00:07:15.612 [2024-07-15 20:21:07.884234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.612 [2024-07-15 20:21:07.884284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15564440311518713815 len:65536 00:07:15.612 [2024-07-15 20:21:07.884299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.612 [2024-07-15 20:21:07.884346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11212727221076335509 len:65536 00:07:15.612 [2024-07-15 20:21:07.884362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.612 #72 NEW cov: 12210 ft: 14693 corp: 29/810b lim: 50 exec/s: 72 rss: 72Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:07:15.612 [2024-07-15 20:21:07.934127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446463681786019839 len:65536 00:07:15.612 [2024-07-15 20:21:07.934153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.612 [2024-07-15 20:21:07.934197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281474970095360 len:65536 00:07:15.612 [2024-07-15 20:21:07.934212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.612 [2024-07-15 20:21:07.934263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:39836 00:07:15.612 [2024-07-15 20:21:07.934279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.612 #73 NEW cov: 12210 ft: 14712 corp: 30/848b lim: 50 exec/s: 73 rss: 72Mb L: 38/42 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:07:15.612 [2024-07-15 20:21:07.984068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65535 00:07:15.612 [2024-07-15 20:21:07.984095] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.871 #74 NEW cov: 12210 ft: 14765 corp: 31/863b lim: 50 exec/s: 74 rss: 72Mb L: 15/42 MS: 1 ChangeBit- 00:07:15.871 [2024-07-15 20:21:08.034317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095938028059 len:65536 00:07:15.871 [2024-07-15 20:21:08.034343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.871 [2024-07-15 20:21:08.034378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:15.871 [2024-07-15 20:21:08.034395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.871 #75 NEW cov: 12210 ft: 14782 corp: 32/890b lim: 50 exec/s: 75 rss: 72Mb L: 27/42 MS: 1 ChangeBit- 00:07:15.871 [2024-07-15 20:21:08.074298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10634005404865237907 len:37780 00:07:15.871 [2024-07-15 20:21:08.074324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.871 #77 NEW cov: 12210 ft: 14870 corp: 33/909b lim: 50 exec/s: 77 rss: 72Mb L: 19/42 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:15.871 [2024-07-15 20:21:08.114537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18388789916026732586 len:1 00:07:15.871 [2024-07-15 20:21:08.114565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.871 [2024-07-15 20:21:08.114602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:15.871 [2024-07-15 20:21:08.114619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.871 #81 NEW cov: 12210 ft: 14874 corp: 34/938b lim: 50 exec/s: 81 rss: 73Mb L: 29/42 MS: 4 CrossOver-CopyPart-PersAutoDict-CrossOver- DE: "\017\000\000\000"- 00:07:15.871 [2024-07-15 20:21:08.164810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343145471 len:65536 00:07:15.871 [2024-07-15 20:21:08.164837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.871 [2024-07-15 20:21:08.164872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743644212822015 len:39836 00:07:15.871 [2024-07-15 20:21:08.164888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.871 [2024-07-15 20:21:08.164940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18418595659526176767 len:65536 00:07:15.871 [2024-07-15 20:21:08.164956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.871 #82 NEW cov: 12210 ft: 14892 corp: 35/972b lim: 50 exec/s: 82 rss: 73Mb L: 34/42 MS: 1 CrossOver- 
00:07:15.871 [2024-07-15 20:21:08.204810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11240984666343211007 len:65536 00:07:15.871 [2024-07-15 20:21:08.204838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.871 [2024-07-15 20:21:08.204888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446633693050044415 len:39836 00:07:15.871 [2024-07-15 20:21:08.204904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.871 #83 NEW cov: 12210 ft: 14930 corp: 36/1000b lim: 50 exec/s: 83 rss: 73Mb L: 28/42 MS: 1 InsertByte- 00:07:15.871 [2024-07-15 20:21:08.244910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7161677109553816419 len:25444 00:07:15.871 [2024-07-15 20:21:08.244937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.871 [2024-07-15 20:21:08.244974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7161677110969590627 len:25444 00:07:15.871 [2024-07-15 20:21:08.244990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.130 #85 NEW cov: 12210 ft: 14937 corp: 37/1027b lim: 50 exec/s: 85 rss: 73Mb L: 27/42 MS: 2 PersAutoDict-InsertRepeatedBytes- DE: "\017\000\000\000"- 00:07:16.130 [2024-07-15 20:21:08.285138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446463681786019839 len:65536 00:07:16.130 [2024-07-15 20:21:08.285167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.130 [2024-07-15 20:21:08.285215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281474970095360 len:65536 00:07:16.130 [2024-07-15 20:21:08.285230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.130 [2024-07-15 20:21:08.285285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18378908608617250815 len:156 00:07:16.130 [2024-07-15 20:21:08.285301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.130 #86 NEW cov: 12210 ft: 14947 corp: 38/1065b lim: 50 exec/s: 86 rss: 73Mb L: 38/42 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:07:16.130 [2024-07-15 20:21:08.335261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1095938028059 len:65536 00:07:16.130 [2024-07-15 20:21:08.335288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.130 [2024-07-15 20:21:08.335329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:16.130 [2024-07-15 20:21:08.335345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:16.130 [2024-07-15 20:21:08.335399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:16.130 [2024-07-15 20:21:08.335415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.130 #87 NEW cov: 12210 ft: 14964 corp: 39/1102b lim: 50 exec/s: 43 rss: 73Mb L: 37/42 MS: 1 InsertRepeatedBytes- 00:07:16.130 #87 DONE cov: 12210 ft: 14964 corp: 39/1102b lim: 50 exec/s: 43 rss: 73Mb 00:07:16.130 ###### Recommended dictionary. ###### 00:07:16.130 "\017\000\000\000" # Uses: 4 00:07:16.130 ###### End of recommended dictionary. ###### 00:07:16.130 Done 87 runs in 2 second(s) 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.130 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.131 20:21:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:16.389 [2024-07-15 20:21:08.523774] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:16.389 [2024-07-15 20:21:08.523846] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327730 ] 00:07:16.389 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.389 [2024-07-15 20:21:08.699198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.389 [2024-07-15 20:21:08.764219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.648 [2024-07-15 20:21:08.823578] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.648 [2024-07-15 20:21:08.839887] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:16.648 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.648 INFO: Seed: 478376795 00:07:16.648 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:16.648 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:16.648 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:16.648 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.648 #2 INITED exec/s: 0 rss: 65Mb 00:07:16.648 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:16.648 This may also happen if the target rejected all inputs we tried so far 00:07:16.648 [2024-07-15 20:21:08.905183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:16.648 [2024-07-15 20:21:08.905214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.648 [2024-07-15 20:21:08.905284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:16.648 [2024-07-15 20:21:08.905301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.906 NEW_FUNC[1/699]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:16.906 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:16.906 #7 NEW cov: 12023 ft: 12022 corp: 2/50b lim: 90 exec/s: 0 rss: 71Mb L: 49/49 MS: 5 ChangeByte-ChangeByte-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:07:16.906 [2024-07-15 20:21:09.236228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:16.906 [2024-07-15 20:21:09.236297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.906 [2024-07-15 20:21:09.236399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:16.906 [2024-07-15 20:21:09.236435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.906 #8 NEW cov: 12153 ft: 12848 corp: 3/99b lim: 90 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 ChangeByte- 00:07:17.165 [2024-07-15 20:21:09.296026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 
nsid:0 00:07:17.165 [2024-07-15 20:21:09.296053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.165 [2024-07-15 20:21:09.296094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.165 [2024-07-15 20:21:09.296109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.165 #9 NEW cov: 12159 ft: 13077 corp: 4/148b lim: 90 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 CopyPart- 00:07:17.165 [2024-07-15 20:21:09.336172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.165 [2024-07-15 20:21:09.336198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.165 [2024-07-15 20:21:09.336267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.165 [2024-07-15 20:21:09.336283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.165 #10 NEW cov: 12244 ft: 13352 corp: 5/197b lim: 90 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 ShuffleBytes- 00:07:17.165 [2024-07-15 20:21:09.376283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.165 [2024-07-15 20:21:09.376309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.165 [2024-07-15 20:21:09.376360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.165 [2024-07-15 20:21:09.376376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.165 #11 NEW cov: 12244 ft: 13504 corp: 6/246b lim: 90 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 ChangeBit- 00:07:17.165 [2024-07-15 20:21:09.426422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.165 [2024-07-15 20:21:09.426452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.165 [2024-07-15 20:21:09.426509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.165 [2024-07-15 20:21:09.426524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.165 #12 NEW cov: 12244 ft: 13606 corp: 7/296b lim: 90 exec/s: 0 rss: 71Mb L: 50/50 MS: 1 InsertByte- 00:07:17.165 [2024-07-15 20:21:09.476424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.165 [2024-07-15 20:21:09.476455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.165 #13 NEW cov: 12244 ft: 14455 corp: 8/323b lim: 90 exec/s: 0 rss: 71Mb L: 27/50 MS: 1 EraseBytes- 00:07:17.165 [2024-07-15 20:21:09.516620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.165 [2024-07-15 20:21:09.516646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.165 [2024-07-15 20:21:09.516683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.165 [2024-07-15 20:21:09.516698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.424 #14 NEW cov: 12244 ft: 14489 corp: 9/373b lim: 90 exec/s: 0 rss: 71Mb L: 50/50 MS: 1 ChangeBit- 00:07:17.424 [2024-07-15 20:21:09.566648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.424 [2024-07-15 20:21:09.566674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.424 #15 NEW cov: 12244 ft: 14551 corp: 10/400b lim: 90 exec/s: 0 rss: 72Mb L: 27/50 MS: 1 ChangeBit- 00:07:17.424 [2024-07-15 20:21:09.616927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.424 [2024-07-15 20:21:09.616953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.424 [2024-07-15 20:21:09.617005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.424 [2024-07-15 20:21:09.617021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.424 #16 NEW cov: 12244 ft: 14661 corp: 11/449b lim: 90 exec/s: 0 rss: 72Mb L: 49/50 MS: 1 CMP- DE: "\036\000"- 00:07:17.424 [2024-07-15 20:21:09.667084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.424 [2024-07-15 20:21:09.667111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.424 [2024-07-15 20:21:09.667148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.424 [2024-07-15 20:21:09.667162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.424 #17 NEW cov: 12244 ft: 14676 corp: 12/499b lim: 90 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 InsertByte- 00:07:17.424 [2024-07-15 20:21:09.707203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.424 [2024-07-15 20:21:09.707231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.424 [2024-07-15 20:21:09.707296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.424 [2024-07-15 20:21:09.707312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.424 #18 NEW cov: 12244 ft: 14700 corp: 13/548b lim: 90 exec/s: 0 rss: 72Mb L: 49/50 MS: 1 CopyPart- 00:07:17.425 [2024-07-15 20:21:09.747577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.425 [2024-07-15 20:21:09.747604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.425 [2024-07-15 
20:21:09.747651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.425 [2024-07-15 20:21:09.747665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.425 [2024-07-15 20:21:09.747718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.425 [2024-07-15 20:21:09.747733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.425 [2024-07-15 20:21:09.747786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:17.425 [2024-07-15 20:21:09.747800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.425 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:17.425 #19 NEW cov: 12267 ft: 15130 corp: 14/620b lim: 90 exec/s: 0 rss: 72Mb L: 72/72 MS: 1 CopyPart- 00:07:17.425 [2024-07-15 20:21:09.797461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.425 [2024-07-15 20:21:09.797488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.425 [2024-07-15 20:21:09.797554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.425 [2024-07-15 20:21:09.797569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.684 #20 NEW cov: 12267 ft: 15176 corp: 15/671b lim: 90 exec/s: 0 rss: 72Mb L: 51/72 MS: 1 InsertByte- 00:07:17.684 [2024-07-15 20:21:09.847546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.684 [2024-07-15 20:21:09.847574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.684 [2024-07-15 20:21:09.847632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.684 [2024-07-15 20:21:09.847648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.684 #21 NEW cov: 12267 ft: 15223 corp: 16/720b lim: 90 exec/s: 0 rss: 72Mb L: 49/72 MS: 1 CopyPart- 00:07:17.684 [2024-07-15 20:21:09.887966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.684 [2024-07-15 20:21:09.887992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.684 [2024-07-15 20:21:09.888039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.684 [2024-07-15 20:21:09.888057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.684 [2024-07-15 20:21:09.888109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:17.684 [2024-07-15 20:21:09.888125] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.684 [2024-07-15 20:21:09.888176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:17.684 [2024-07-15 20:21:09.888190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:17.684 #22 NEW cov: 12267 ft: 15247 corp: 17/805b lim: 90 exec/s: 22 rss: 72Mb L: 85/85 MS: 1 CopyPart- 00:07:17.684 [2024-07-15 20:21:09.937868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.684 [2024-07-15 20:21:09.937894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.684 [2024-07-15 20:21:09.937950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.684 [2024-07-15 20:21:09.937965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.684 #23 NEW cov: 12267 ft: 15301 corp: 18/852b lim: 90 exec/s: 23 rss: 72Mb L: 47/85 MS: 1 EraseBytes- 00:07:17.684 [2024-07-15 20:21:09.977929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.684 [2024-07-15 20:21:09.977955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.684 [2024-07-15 20:21:09.978008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.684 [2024-07-15 20:21:09.978023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.684 #24 NEW cov: 12267 ft: 15322 corp: 19/888b lim: 90 exec/s: 24 rss: 72Mb L: 36/85 MS: 1 CrossOver- 00:07:17.684 [2024-07-15 20:21:10.028158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.684 [2024-07-15 20:21:10.028187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.684 [2024-07-15 20:21:10.028240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.684 [2024-07-15 20:21:10.028255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.684 #25 NEW cov: 12267 ft: 15331 corp: 20/938b lim: 90 exec/s: 25 rss: 73Mb L: 50/85 MS: 1 ShuffleBytes- 00:07:17.943 [2024-07-15 20:21:10.068176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.943 [2024-07-15 20:21:10.068204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.943 [2024-07-15 20:21:10.068255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.943 [2024-07-15 20:21:10.068271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.943 #26 NEW cov: 12267 ft: 15334 corp: 21/975b lim: 90 exec/s: 26 rss: 73Mb L: 37/85 MS: 1 EraseBytes- 00:07:17.943 
[2024-07-15 20:21:10.108159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.943 [2024-07-15 20:21:10.108187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.943 #27 NEW cov: 12267 ft: 15406 corp: 22/1005b lim: 90 exec/s: 27 rss: 73Mb L: 30/85 MS: 1 EraseBytes- 00:07:17.943 [2024-07-15 20:21:10.158318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.943 [2024-07-15 20:21:10.158348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.943 #28 NEW cov: 12267 ft: 15444 corp: 23/1032b lim: 90 exec/s: 28 rss: 73Mb L: 27/85 MS: 1 ChangeBit- 00:07:17.943 [2024-07-15 20:21:10.198571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.943 [2024-07-15 20:21:10.198597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.943 [2024-07-15 20:21:10.198636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.943 [2024-07-15 20:21:10.198652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.943 #29 NEW cov: 12267 ft: 15489 corp: 24/1083b lim: 90 exec/s: 29 rss: 73Mb L: 51/85 MS: 1 CopyPart- 00:07:17.943 [2024-07-15 20:21:10.248719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.943 [2024-07-15 20:21:10.248745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.943 [2024-07-15 20:21:10.248786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.943 [2024-07-15 20:21:10.248801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.943 #30 NEW cov: 12267 ft: 15499 corp: 25/1135b lim: 90 exec/s: 30 rss: 73Mb L: 52/85 MS: 1 CrossOver- 00:07:17.943 [2024-07-15 20:21:10.298846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.943 [2024-07-15 20:21:10.298872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.943 [2024-07-15 20:21:10.298926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.943 [2024-07-15 20:21:10.298942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.202 #31 NEW cov: 12267 ft: 15514 corp: 26/1178b lim: 90 exec/s: 31 rss: 73Mb L: 43/85 MS: 1 EraseBytes- 00:07:18.202 [2024-07-15 20:21:10.349029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.202 [2024-07-15 20:21:10.349055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.202 [2024-07-15 20:21:10.349093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.202 [2024-07-15 20:21:10.349109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.202 #32 NEW cov: 12267 ft: 15520 corp: 27/1216b lim: 90 exec/s: 32 rss: 73Mb L: 38/85 MS: 1 InsertByte- 00:07:18.202 [2024-07-15 20:21:10.389385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.202 [2024-07-15 20:21:10.389411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.202 [2024-07-15 20:21:10.389464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.202 [2024-07-15 20:21:10.389480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.202 [2024-07-15 20:21:10.389534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.202 [2024-07-15 20:21:10.389550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.202 [2024-07-15 20:21:10.389603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.202 [2024-07-15 20:21:10.389624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.202 #33 NEW cov: 12267 ft: 15529 corp: 28/1303b lim: 90 exec/s: 33 rss: 73Mb L: 87/87 MS: 1 PersAutoDict- DE: "\036\000"- 00:07:18.202 [2024-07-15 20:21:10.439086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.202 [2024-07-15 20:21:10.439112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.202 #34 NEW cov: 12267 ft: 15550 corp: 29/1330b lim: 90 exec/s: 34 rss: 73Mb L: 27/87 MS: 1 CrossOver- 00:07:18.202 [2024-07-15 20:21:10.479223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.202 [2024-07-15 20:21:10.479249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.202 #35 NEW cov: 12267 ft: 15576 corp: 30/1364b lim: 90 exec/s: 35 rss: 73Mb L: 34/87 MS: 1 EraseBytes- 00:07:18.202 [2024-07-15 20:21:10.519431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.202 [2024-07-15 20:21:10.519460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.202 [2024-07-15 20:21:10.519503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.202 [2024-07-15 20:21:10.519517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.202 #36 NEW cov: 12267 ft: 15583 corp: 31/1400b lim: 90 exec/s: 36 rss: 73Mb L: 36/87 MS: 1 EraseBytes- 00:07:18.202 [2024-07-15 20:21:10.569603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.202 [2024-07-15 20:21:10.569629] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.202 [2024-07-15 20:21:10.569667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.202 [2024-07-15 20:21:10.569683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.461 #37 NEW cov: 12267 ft: 15587 corp: 32/1449b lim: 90 exec/s: 37 rss: 73Mb L: 49/87 MS: 1 PersAutoDict- DE: "\036\000"- 00:07:18.461 [2024-07-15 20:21:10.610015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.461 [2024-07-15 20:21:10.610041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.610106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.461 [2024-07-15 20:21:10.610122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.610177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.461 [2024-07-15 20:21:10.610193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.610249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.461 [2024-07-15 20:21:10.610264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.461 #38 NEW cov: 12267 ft: 15604 corp: 33/1534b lim: 90 exec/s: 38 rss: 73Mb L: 85/87 MS: 1 CopyPart- 00:07:18.461 [2024-07-15 20:21:10.650100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.461 [2024-07-15 20:21:10.650127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.650184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.461 [2024-07-15 20:21:10.650199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.650252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.461 [2024-07-15 20:21:10.650268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.650321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.461 [2024-07-15 20:21:10.650337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.461 #39 NEW cov: 12267 ft: 15621 corp: 34/1621b lim: 90 exec/s: 39 rss: 74Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:07:18.461 [2024-07-15 20:21:10.699971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.461 
[2024-07-15 20:21:10.699997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.700035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.461 [2024-07-15 20:21:10.700050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.461 #40 NEW cov: 12267 ft: 15659 corp: 35/1670b lim: 90 exec/s: 40 rss: 74Mb L: 49/87 MS: 1 ShuffleBytes- 00:07:18.461 [2024-07-15 20:21:10.740034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.461 [2024-07-15 20:21:10.740060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.740099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.461 [2024-07-15 20:21:10.740114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.461 #41 NEW cov: 12267 ft: 15672 corp: 36/1720b lim: 90 exec/s: 41 rss: 74Mb L: 50/87 MS: 1 InsertByte- 00:07:18.461 [2024-07-15 20:21:10.790239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.461 [2024-07-15 20:21:10.790265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.790302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.461 [2024-07-15 20:21:10.790316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.461 #42 NEW cov: 12267 ft: 15675 corp: 37/1769b lim: 90 exec/s: 42 rss: 74Mb L: 49/87 MS: 1 CMP- DE: "\036\000"- 00:07:18.461 [2024-07-15 20:21:10.830607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.461 [2024-07-15 20:21:10.830633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.830678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.461 [2024-07-15 20:21:10.830693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.830749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.461 [2024-07-15 20:21:10.830763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.461 [2024-07-15 20:21:10.830818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.461 [2024-07-15 20:21:10.830836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.719 #43 NEW cov: 12267 ft: 15680 corp: 38/1852b lim: 90 exec/s: 43 rss: 74Mb L: 83/87 MS: 1 InsertRepeatedBytes- 00:07:18.719 [2024-07-15 
20:21:10.870289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.719 [2024-07-15 20:21:10.870314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.719 #44 NEW cov: 12267 ft: 15696 corp: 39/1881b lim: 90 exec/s: 22 rss: 74Mb L: 29/87 MS: 1 CMP- DE: "\035\000"- 00:07:18.719 #44 DONE cov: 12267 ft: 15696 corp: 39/1881b lim: 90 exec/s: 22 rss: 74Mb 00:07:18.719 ###### Recommended dictionary. ###### 00:07:18.719 "\036\000" # Uses: 2 00:07:18.719 "\035\000" # Uses: 0 00:07:18.719 ###### End of recommended dictionary. ###### 00:07:18.719 Done 44 runs in 2 second(s) 00:07:18.719 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:18.719 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.719 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:18.720 20:21:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:18.720 [2024-07-15 20:21:11.059477] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:18.720 [2024-07-15 20:21:11.059547] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328089 ] 00:07:18.720 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.978 [2024-07-15 20:21:11.236359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.978 [2024-07-15 20:21:11.302863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.237 [2024-07-15 20:21:11.362366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.237 [2024-07-15 20:21:11.378664] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:19.237 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.237 INFO: Seed: 3018376799 00:07:19.237 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:19.237 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:19.237 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:19.237 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.237 #2 INITED exec/s: 0 rss: 64Mb 00:07:19.237 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.237 This may also happen if the target rejected all inputs we tried so far 00:07:19.237 [2024-07-15 20:21:11.445213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.237 [2024-07-15 20:21:11.445256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.237 [2024-07-15 20:21:11.445394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.237 [2024-07-15 20:21:11.445416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.237 [2024-07-15 20:21:11.445544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.237 [2024-07-15 20:21:11.445569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.496 NEW_FUNC[1/699]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:19.496 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:19.496 #8 NEW cov: 12016 ft: 12016 corp: 2/38b lim: 50 exec/s: 0 rss: 70Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:19.496 [2024-07-15 20:21:11.786341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.496 [2024-07-15 20:21:11.786396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.496 [2024-07-15 20:21:11.786557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.496 [2024-07-15 20:21:11.786588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.496 [2024-07-15 20:21:11.786738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.496 [2024-07-15 20:21:11.786769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.496 [2024-07-15 20:21:11.786924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:19.496 [2024-07-15 20:21:11.786958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.496 #9 NEW cov: 12129 ft: 12934 corp: 3/82b lim: 50 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:19.496 [2024-07-15 20:21:11.846312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.496 [2024-07-15 20:21:11.846346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.496 [2024-07-15 20:21:11.846453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.496 [2024-07-15 20:21:11.846481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.496 [2024-07-15 20:21:11.846622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.496 [2024-07-15 20:21:11.846648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.496 [2024-07-15 20:21:11.846781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:19.496 [2024-07-15 20:21:11.846809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.755 #10 NEW cov: 12135 ft: 13146 corp: 4/126b lim: 50 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 ChangeBinInt- 00:07:19.755 [2024-07-15 20:21:11.906657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.755 [2024-07-15 20:21:11.906686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:11.906818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.755 [2024-07-15 20:21:11.906842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:11.906966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.755 [2024-07-15 20:21:11.906987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:11.907116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:19.755 [2024-07-15 20:21:11.907141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.755 #16 NEW cov: 12220 ft: 13340 corp: 5/170b lim: 50 exec/s: 0 rss: 70Mb 
L: 44/44 MS: 1 ChangeBinInt- 00:07:19.755 [2024-07-15 20:21:11.956409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.755 [2024-07-15 20:21:11.956446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:11.956549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.755 [2024-07-15 20:21:11.956574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:11.956705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.755 [2024-07-15 20:21:11.956730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.755 #17 NEW cov: 12220 ft: 13521 corp: 6/207b lim: 50 exec/s: 0 rss: 71Mb L: 37/44 MS: 1 ChangeBinInt- 00:07:19.755 [2024-07-15 20:21:12.016913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.755 [2024-07-15 20:21:12.016945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:12.017082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.755 [2024-07-15 20:21:12.017103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:12.017239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.755 [2024-07-15 20:21:12.017260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:12.017390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:19.755 [2024-07-15 20:21:12.017410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.755 #18 NEW cov: 12220 ft: 13626 corp: 7/252b lim: 50 exec/s: 0 rss: 71Mb L: 45/45 MS: 1 CopyPart- 00:07:19.755 [2024-07-15 20:21:12.076795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.755 [2024-07-15 20:21:12.076821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:12.076955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.755 [2024-07-15 20:21:12.076981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.755 [2024-07-15 20:21:12.077111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.756 [2024-07-15 20:21:12.077128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.756 #19 NEW cov: 12220 ft: 13685 corp: 8/289b lim: 50 exec/s: 0 rss: 71Mb L: 37/45 MS: 
1 ChangeBinInt- 00:07:19.756 [2024-07-15 20:21:12.127210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:19.756 [2024-07-15 20:21:12.127241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.756 [2024-07-15 20:21:12.127343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:19.756 [2024-07-15 20:21:12.127369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.756 [2024-07-15 20:21:12.127507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:19.756 [2024-07-15 20:21:12.127530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.756 [2024-07-15 20:21:12.127663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:19.756 [2024-07-15 20:21:12.127687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.014 #20 NEW cov: 12220 ft: 13774 corp: 9/333b lim: 50 exec/s: 0 rss: 71Mb L: 44/45 MS: 1 ChangeBinInt- 00:07:20.014 [2024-07-15 20:21:12.187391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.015 [2024-07-15 20:21:12.187423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.015 [2024-07-15 20:21:12.187571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.015 [2024-07-15 20:21:12.187600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.015 [2024-07-15 20:21:12.187736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.015 [2024-07-15 20:21:12.187765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.015 [2024-07-15 20:21:12.187900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.015 [2024-07-15 20:21:12.187929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.015 #21 NEW cov: 12220 ft: 13793 corp: 10/377b lim: 50 exec/s: 0 rss: 71Mb L: 44/45 MS: 1 ChangeBit- 00:07:20.015 [2024-07-15 20:21:12.237289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.015 [2024-07-15 20:21:12.237320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.015 [2024-07-15 20:21:12.237430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.015 [2024-07-15 20:21:12.237454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.015 [2024-07-15 20:21:12.237590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.015 [2024-07-15 20:21:12.237612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.015 #22 NEW cov: 12220 ft: 13843 corp: 11/416b lim: 50 exec/s: 0 rss: 71Mb L: 39/45 MS: 1 EraseBytes- 00:07:20.015 [2024-07-15 20:21:12.287466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.015 [2024-07-15 20:21:12.287498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.015 [2024-07-15 20:21:12.287629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.015 [2024-07-15 20:21:12.287648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.015 [2024-07-15 20:21:12.287782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.015 [2024-07-15 20:21:12.287802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.015 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:20.015 #23 NEW cov: 12243 ft: 13876 corp: 12/453b lim: 50 exec/s: 0 rss: 71Mb L: 37/45 MS: 1 ChangeByte- 00:07:20.015 [2024-07-15 20:21:12.347666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.015 [2024-07-15 20:21:12.347698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.015 [2024-07-15 20:21:12.347821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.015 [2024-07-15 20:21:12.347845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.015 [2024-07-15 20:21:12.347984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.015 [2024-07-15 20:21:12.348012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.015 #24 NEW cov: 12243 ft: 13919 corp: 13/486b lim: 50 exec/s: 0 rss: 71Mb L: 33/45 MS: 1 EraseBytes- 00:07:20.274 [2024-07-15 20:21:12.398148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.274 [2024-07-15 20:21:12.398178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.398254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.274 [2024-07-15 20:21:12.398278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.398411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.274 [2024-07-15 20:21:12.398432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.398558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.274 [2024-07-15 20:21:12.398583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.274 #25 NEW cov: 12243 ft: 13952 corp: 14/533b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 CrossOver- 00:07:20.274 [2024-07-15 20:21:12.448186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.274 [2024-07-15 20:21:12.448218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.448310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.274 [2024-07-15 20:21:12.448332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.448468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.274 [2024-07-15 20:21:12.448503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.448638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.274 [2024-07-15 20:21:12.448657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.274 #26 NEW cov: 12243 ft: 13958 corp: 15/581b lim: 50 exec/s: 26 rss: 71Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:07:20.274 [2024-07-15 20:21:12.498369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.274 [2024-07-15 20:21:12.498402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.498502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.274 [2024-07-15 20:21:12.498529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.498663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.274 [2024-07-15 20:21:12.498690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.498823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.274 [2024-07-15 20:21:12.498849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.274 #27 NEW cov: 12243 ft: 13975 corp: 16/624b lim: 50 exec/s: 27 rss: 71Mb L: 43/48 MS: 1 CrossOver- 00:07:20.274 [2024-07-15 20:21:12.558504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.274 [2024-07-15 20:21:12.558533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.558614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.274 [2024-07-15 20:21:12.558636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.558756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.274 [2024-07-15 20:21:12.558781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.274 [2024-07-15 20:21:12.558915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.274 [2024-07-15 20:21:12.558936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.275 #33 NEW cov: 12243 ft: 14058 corp: 17/672b lim: 50 exec/s: 33 rss: 71Mb L: 48/48 MS: 1 ChangeBinInt- 00:07:20.275 [2024-07-15 20:21:12.618171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.275 [2024-07-15 20:21:12.618207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.275 [2024-07-15 20:21:12.618313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.275 [2024-07-15 20:21:12.618335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.275 #34 NEW cov: 12243 ft: 14380 corp: 18/694b lim: 50 exec/s: 34 rss: 72Mb L: 22/48 MS: 1 EraseBytes- 00:07:20.534 [2024-07-15 20:21:12.678703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.534 [2024-07-15 20:21:12.678743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.678873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.534 [2024-07-15 20:21:12.678894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.679027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.534 [2024-07-15 20:21:12.679055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.534 #35 NEW cov: 12243 ft: 14451 corp: 19/727b lim: 50 exec/s: 35 rss: 72Mb L: 33/48 MS: 1 ShuffleBytes- 00:07:20.534 [2024-07-15 20:21:12.739157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.534 [2024-07-15 20:21:12.739195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.739331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.534 [2024-07-15 20:21:12.739359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.739494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.534 [2024-07-15 20:21:12.739522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.739651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.534 [2024-07-15 20:21:12.739677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.534 #36 NEW cov: 12243 ft: 14494 corp: 20/775b lim: 50 exec/s: 36 rss: 72Mb L: 48/48 MS: 1 CopyPart- 00:07:20.534 [2024-07-15 20:21:12.799326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.534 [2024-07-15 20:21:12.799361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.799500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.534 [2024-07-15 20:21:12.799525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.799653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.534 [2024-07-15 20:21:12.799680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.799807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.534 [2024-07-15 20:21:12.799827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.534 #37 NEW cov: 12243 ft: 14521 corp: 21/818b lim: 50 exec/s: 37 rss: 72Mb L: 43/48 MS: 1 ChangeByte- 00:07:20.534 [2024-07-15 20:21:12.849272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.534 [2024-07-15 20:21:12.849306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.849430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.534 [2024-07-15 20:21:12.849458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.849594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.534 [2024-07-15 20:21:12.849617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.534 #38 NEW cov: 12243 ft: 14572 corp: 22/855b lim: 50 exec/s: 38 rss: 72Mb L: 37/48 MS: 1 ChangeByte- 00:07:20.534 [2024-07-15 20:21:12.899704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.534 [2024-07-15 20:21:12.899740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.899856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.534 [2024-07-15 20:21:12.899882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.900014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.534 [2024-07-15 20:21:12.900039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.534 [2024-07-15 20:21:12.900162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.534 [2024-07-15 20:21:12.900187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.793 #39 NEW cov: 12243 ft: 14589 corp: 23/903b lim: 50 exec/s: 39 rss: 72Mb L: 48/48 MS: 1 ChangeByte- 00:07:20.793 [2024-07-15 20:21:12.949826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.793 [2024-07-15 20:21:12.949862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:12.949985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.793 [2024-07-15 20:21:12.950010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:12.950136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.793 [2024-07-15 20:21:12.950160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:12.950289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.793 [2024-07-15 20:21:12.950313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.793 #40 NEW cov: 12243 ft: 14612 corp: 24/951b lim: 50 exec/s: 40 rss: 72Mb L: 48/48 MS: 1 ChangeBinInt- 00:07:20.793 [2024-07-15 20:21:13.009788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.793 [2024-07-15 20:21:13.009823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:13.009945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.793 [2024-07-15 20:21:13.009971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:13.010095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.793 [2024-07-15 20:21:13.010123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.793 #41 NEW cov: 12243 ft: 14616 corp: 25/988b lim: 50 exec/s: 41 rss: 72Mb L: 37/48 MS: 1 
CopyPart- 00:07:20.793 [2024-07-15 20:21:13.060472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.793 [2024-07-15 20:21:13.060506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:13.060600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.793 [2024-07-15 20:21:13.060621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:13.060749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.793 [2024-07-15 20:21:13.060776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:13.060906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.793 [2024-07-15 20:21:13.060932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:13.061061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:20.793 [2024-07-15 20:21:13.061083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:20.793 #42 NEW cov: 12243 ft: 14671 corp: 26/1038b lim: 50 exec/s: 42 rss: 72Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:20.793 [2024-07-15 20:21:13.130377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.793 [2024-07-15 20:21:13.130412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:13.130532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.793 [2024-07-15 20:21:13.130558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:13.130688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.793 [2024-07-15 20:21:13.130714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.793 [2024-07-15 20:21:13.130853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.793 [2024-07-15 20:21:13.130880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.793 #43 NEW cov: 12243 ft: 14758 corp: 27/1086b lim: 50 exec/s: 43 rss: 72Mb L: 48/50 MS: 1 ChangeByte- 00:07:21.061 [2024-07-15 20:21:13.200538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.061 [2024-07-15 20:21:13.200571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.200678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.061 [2024-07-15 20:21:13.200701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.200821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.061 [2024-07-15 20:21:13.200845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.200978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:21.061 [2024-07-15 20:21:13.200999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.061 #44 NEW cov: 12243 ft: 14820 corp: 28/1131b lim: 50 exec/s: 44 rss: 72Mb L: 45/50 MS: 1 InsertByte- 00:07:21.061 [2024-07-15 20:21:13.250797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.061 [2024-07-15 20:21:13.250837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.250976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.061 [2024-07-15 20:21:13.251003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.251126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.061 [2024-07-15 20:21:13.251152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.251278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:21.061 [2024-07-15 20:21:13.251304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.061 #45 NEW cov: 12243 ft: 14825 corp: 29/1180b lim: 50 exec/s: 45 rss: 72Mb L: 49/50 MS: 1 InsertByte- 00:07:21.061 [2024-07-15 20:21:13.300271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.061 [2024-07-15 20:21:13.300305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.300423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.061 [2024-07-15 20:21:13.300448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.061 #46 NEW cov: 12243 ft: 14888 corp: 30/1201b lim: 50 exec/s: 46 rss: 72Mb L: 21/50 MS: 1 CrossOver- 00:07:21.061 [2024-07-15 20:21:13.351129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.061 [2024-07-15 20:21:13.351163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.351262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.061 [2024-07-15 20:21:13.351287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.351413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.061 [2024-07-15 20:21:13.351434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.351572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:21.061 [2024-07-15 20:21:13.351601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.061 #47 NEW cov: 12243 ft: 14898 corp: 31/1243b lim: 50 exec/s: 47 rss: 72Mb L: 42/50 MS: 1 CrossOver- 00:07:21.061 [2024-07-15 20:21:13.410979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.061 [2024-07-15 20:21:13.411012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.411131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.061 [2024-07-15 20:21:13.411157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.061 [2024-07-15 20:21:13.411289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.061 [2024-07-15 20:21:13.411313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.327 #48 NEW cov: 12243 ft: 14909 corp: 32/1282b lim: 50 exec/s: 24 rss: 73Mb L: 39/50 MS: 1 CopyPart- 00:07:21.327 #48 DONE cov: 12243 ft: 14909 corp: 32/1282b lim: 50 exec/s: 24 rss: 73Mb 00:07:21.327 Done 48 runs in 2 second(s) 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 
00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.327 20:21:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:21.327 [2024-07-15 20:21:13.612200] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:21.327 [2024-07-15 20:21:13.612269] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328618 ] 00:07:21.327 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.586 [2024-07-15 20:21:13.806469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.586 [2024-07-15 20:21:13.871823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.586 [2024-07-15 20:21:13.931205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.586 [2024-07-15 20:21:13.947470] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:21.586 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.586 INFO: Seed: 1291396536 00:07:21.845 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:21.845 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:21.845 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:21.845 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.845 #2 INITED exec/s: 0 rss: 64Mb 00:07:21.845 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:21.845 This may also happen if the target rejected all inputs we tried so far 00:07:21.845 [2024-07-15 20:21:14.012967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:21.845 [2024-07-15 20:21:14.013000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.845 [2024-07-15 20:21:14.013047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:21.845 [2024-07-15 20:21:14.013062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.845 [2024-07-15 20:21:14.013116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:21.845 [2024-07-15 20:21:14.013132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.105 NEW_FUNC[1/699]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:22.105 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.105 #3 NEW cov: 12042 ft: 12038 corp: 2/66b lim: 85 exec/s: 0 rss: 70Mb L: 65/65 MS: 1 InsertRepeatedBytes- 00:07:22.105 [2024-07-15 20:21:14.353961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.105 [2024-07-15 20:21:14.354000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.105 [2024-07-15 20:21:14.354068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.105 [2024-07-15 20:21:14.354085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.105 [2024-07-15 20:21:14.354137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.105 [2024-07-15 20:21:14.354154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.105 [2024-07-15 20:21:14.354211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.105 [2024-07-15 20:21:14.354227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.105 #6 NEW cov: 12155 ft: 13004 corp: 3/135b lim: 85 exec/s: 0 rss: 70Mb L: 69/69 MS: 3 CMP-ChangeBit-CrossOver- DE: "\000\000\000\000\000\000\000\000"- 00:07:22.105 [2024-07-15 20:21:14.393946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.105 [2024-07-15 20:21:14.393974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.105 [2024-07-15 20:21:14.394021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.105 [2024-07-15 20:21:14.394036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.105 [2024-07-15 20:21:14.394088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.105 [2024-07-15 20:21:14.394103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.105 [2024-07-15 20:21:14.394155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.105 [2024-07-15 20:21:14.394170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.105 #9 NEW cov: 12161 ft: 13173 corp: 4/219b lim: 85 exec/s: 0 rss: 70Mb L: 84/84 MS: 3 ChangeBinInt-CopyPart-InsertRepeatedBytes- 00:07:22.105 [2024-07-15 20:21:14.433930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.105 [2024-07-15 20:21:14.433957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.105 [2024-07-15 20:21:14.434018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.105 [2024-07-15 20:21:14.434033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.105 [2024-07-15 20:21:14.434087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.105 [2024-07-15 20:21:14.434103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.105 #10 NEW cov: 12246 ft: 13369 corp: 5/284b lim: 85 exec/s: 0 rss: 71Mb L: 65/84 MS: 1 ChangeBinInt- 00:07:22.105 [2024-07-15 20:21:14.483779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.105 [2024-07-15 20:21:14.483806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.365 #12 NEW cov: 12246 ft: 14354 corp: 6/307b lim: 85 exec/s: 0 rss: 71Mb L: 23/84 MS: 2 ChangeByte-CrossOver- 00:07:22.365 [2024-07-15 20:21:14.523883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.365 [2024-07-15 20:21:14.523911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.365 #14 NEW cov: 12246 ft: 14462 corp: 7/324b lim: 85 exec/s: 0 rss: 71Mb L: 17/84 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:22.365 [2024-07-15 20:21:14.564293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.365 [2024-07-15 20:21:14.564320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.365 [2024-07-15 20:21:14.564361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.365 [2024-07-15 20:21:14.564376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.365 [2024-07-15 20:21:14.564429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.365 [2024-07-15 20:21:14.564448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.365 #15 NEW cov: 12246 ft: 14506 corp: 8/381b lim: 85 exec/s: 0 rss: 71Mb L: 57/84 MS: 1 InsertRepeatedBytes- 00:07:22.365 [2024-07-15 20:21:14.614124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.365 [2024-07-15 20:21:14.614151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.365 #16 NEW cov: 12246 ft: 14529 corp: 9/409b lim: 85 exec/s: 0 rss: 71Mb L: 28/84 MS: 1 CrossOver- 00:07:22.365 [2024-07-15 20:21:14.654797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.365 [2024-07-15 20:21:14.654823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.365 [2024-07-15 20:21:14.654876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.365 [2024-07-15 20:21:14.654889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.365 [2024-07-15 20:21:14.654941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.365 [2024-07-15 20:21:14.654958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.365 [2024-07-15 20:21:14.655011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.365 [2024-07-15 20:21:14.655026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.365 [2024-07-15 20:21:14.655084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:22.365 [2024-07-15 20:21:14.655100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:22.365 #19 NEW cov: 12246 ft: 14591 corp: 10/494b lim: 85 exec/s: 0 rss: 71Mb L: 85/85 MS: 3 ChangeBinInt-ChangeByte-CrossOver- 00:07:22.365 [2024-07-15 20:21:14.694475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.365 [2024-07-15 20:21:14.694502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.365 [2024-07-15 20:21:14.694556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.365 [2024-07-15 20:21:14.694573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.365 #20 NEW cov: 12246 ft: 14924 corp: 11/541b lim: 85 exec/s: 0 rss: 71Mb L: 47/85 MS: 1 EraseBytes- 00:07:22.365 [2024-07-15 20:21:14.744807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.365 [2024-07-15 20:21:14.744835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.365 [2024-07-15 20:21:14.744873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.365 [2024-07-15 20:21:14.744889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.365 [2024-07-15 20:21:14.744943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.366 [2024-07-15 20:21:14.744959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.625 #21 NEW cov: 12246 ft: 14970 corp: 12/606b lim: 85 exec/s: 0 rss: 71Mb L: 65/85 MS: 1 CopyPart- 00:07:22.625 [2024-07-15 20:21:14.784749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.625 [2024-07-15 20:21:14.784777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.625 [2024-07-15 20:21:14.784842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.625 [2024-07-15 20:21:14.784858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.625 #22 NEW cov: 12246 ft: 15009 corp: 13/652b lim: 85 exec/s: 0 rss: 71Mb L: 46/85 MS: 1 InsertRepeatedBytes- 00:07:22.625 [2024-07-15 20:21:14.825145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.625 [2024-07-15 20:21:14.825172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.625 [2024-07-15 20:21:14.825227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.625 [2024-07-15 20:21:14.825243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.625 [2024-07-15 20:21:14.825296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.625 [2024-07-15 20:21:14.825311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.625 [2024-07-15 20:21:14.825367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.625 [2024-07-15 20:21:14.825383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.625 #23 NEW cov: 12246 ft: 15070 corp: 14/734b lim: 85 exec/s: 0 rss: 71Mb L: 82/85 MS: 1 InsertRepeatedBytes- 00:07:22.625 [2024-07-15 20:21:14.865250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.625 [2024-07-15 20:21:14.865277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.625 [2024-07-15 20:21:14.865339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.625 [2024-07-15 20:21:14.865354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.625 [2024-07-15 20:21:14.865407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.625 [2024-07-15 20:21:14.865423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.625 [2024-07-15 20:21:14.865481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.625 [2024-07-15 20:21:14.865494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.625 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:22.625 #24 NEW cov: 12269 ft: 15141 corp: 15/818b lim: 85 exec/s: 0 rss: 71Mb L: 84/85 MS: 1 CopyPart- 00:07:22.625 [2024-07-15 20:21:14.914996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.625 [2024-07-15 20:21:14.915022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.625 #25 NEW cov: 12269 ft: 15159 corp: 16/842b lim: 85 exec/s: 0 rss: 71Mb L: 24/85 MS: 1 InsertByte- 00:07:22.625 [2024-07-15 20:21:14.965406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.625 [2024-07-15 20:21:14.965432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.625 [2024-07-15 20:21:14.965497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.625 [2024-07-15 20:21:14.965512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.625 [2024-07-15 20:21:14.965568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.625 [2024-07-15 20:21:14.965584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.625 #26 NEW cov: 12269 ft: 15199 corp: 17/896b lim: 85 exec/s: 26 rss: 71Mb L: 54/85 MS: 1 EraseBytes- 00:07:22.885 [2024-07-15 20:21:15.015564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.885 [2024-07-15 20:21:15.015591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.015627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.885 [2024-07-15 20:21:15.015642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.015697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.885 [2024-07-15 20:21:15.015713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.885 #27 NEW cov: 12269 ft: 15210 corp: 18/962b lim: 85 exec/s: 27 rss: 71Mb L: 66/85 MS: 1 InsertByte- 00:07:22.885 [2024-07-15 
20:21:15.056026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.885 [2024-07-15 20:21:15.056053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.056113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.885 [2024-07-15 20:21:15.056128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.056184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.885 [2024-07-15 20:21:15.056200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.056255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.885 [2024-07-15 20:21:15.056270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.056325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:22.885 [2024-07-15 20:21:15.056339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:22.885 #28 NEW cov: 12269 ft: 15259 corp: 19/1047b lim: 85 exec/s: 28 rss: 71Mb L: 85/85 MS: 1 InsertByte- 00:07:22.885 [2024-07-15 20:21:15.096088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.885 [2024-07-15 20:21:15.096115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.096183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.885 [2024-07-15 20:21:15.096199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.096252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.885 [2024-07-15 20:21:15.096267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.096318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.885 [2024-07-15 20:21:15.096333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.096388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:22.885 [2024-07-15 20:21:15.096404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:22.885 #29 NEW cov: 12269 ft: 15279 corp: 20/1132b lim: 85 exec/s: 29 rss: 72Mb L: 85/85 MS: 1 ShuffleBytes- 00:07:22.885 [2024-07-15 20:21:15.146102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:0 nsid:0 00:07:22.885 [2024-07-15 20:21:15.146130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.146177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.885 [2024-07-15 20:21:15.146192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.146246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.885 [2024-07-15 20:21:15.146261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.146315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.885 [2024-07-15 20:21:15.146332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.885 #30 NEW cov: 12269 ft: 15297 corp: 21/1210b lim: 85 exec/s: 30 rss: 72Mb L: 78/85 MS: 1 EraseBytes- 00:07:22.885 [2024-07-15 20:21:15.186058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.885 [2024-07-15 20:21:15.186084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.186137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.885 [2024-07-15 20:21:15.186153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.186204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.885 [2024-07-15 20:21:15.186220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.885 #31 NEW cov: 12269 ft: 15304 corp: 22/1276b lim: 85 exec/s: 31 rss: 72Mb L: 66/85 MS: 1 InsertByte- 00:07:22.885 [2024-07-15 20:21:15.226304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.885 [2024-07-15 20:21:15.226331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.226377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.885 [2024-07-15 20:21:15.226393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.885 [2024-07-15 20:21:15.226451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.885 [2024-07-15 20:21:15.226467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.886 [2024-07-15 20:21:15.226519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.886 [2024-07-15 20:21:15.226533] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.886 #32 NEW cov: 12269 ft: 15342 corp: 23/1346b lim: 85 exec/s: 32 rss: 72Mb L: 70/85 MS: 1 EraseBytes- 00:07:23.145 [2024-07-15 20:21:15.275998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.145 [2024-07-15 20:21:15.276025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.145 #33 NEW cov: 12269 ft: 15356 corp: 24/1374b lim: 85 exec/s: 33 rss: 72Mb L: 28/85 MS: 1 ChangeBit- 00:07:23.145 [2024-07-15 20:21:15.326724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.145 [2024-07-15 20:21:15.326752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.326819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.145 [2024-07-15 20:21:15.326836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.326892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.145 [2024-07-15 20:21:15.326908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.326963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.145 [2024-07-15 20:21:15.326979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.327033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:07:23.145 [2024-07-15 20:21:15.327049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:23.145 #34 NEW cov: 12269 ft: 15368 corp: 25/1459b lim: 85 exec/s: 34 rss: 72Mb L: 85/85 MS: 1 ChangeBit- 00:07:23.145 [2024-07-15 20:21:15.376343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.145 [2024-07-15 20:21:15.376371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.145 #35 NEW cov: 12269 ft: 15379 corp: 26/1482b lim: 85 exec/s: 35 rss: 72Mb L: 23/85 MS: 1 CMP- DE: "\377\377\001\000"- 00:07:23.145 [2024-07-15 20:21:15.416864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.145 [2024-07-15 20:21:15.416891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.416937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.145 [2024-07-15 20:21:15.416953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.417005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.145 [2024-07-15 20:21:15.417038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.417093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.145 [2024-07-15 20:21:15.417109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.145 #36 NEW cov: 12269 ft: 15418 corp: 27/1561b lim: 85 exec/s: 36 rss: 72Mb L: 79/85 MS: 1 InsertByte- 00:07:23.145 [2024-07-15 20:21:15.466569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.145 [2024-07-15 20:21:15.466596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.145 #37 NEW cov: 12269 ft: 15424 corp: 28/1591b lim: 85 exec/s: 37 rss: 72Mb L: 30/85 MS: 1 CopyPart- 00:07:23.145 [2024-07-15 20:21:15.517171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.145 [2024-07-15 20:21:15.517196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.517256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.145 [2024-07-15 20:21:15.517272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.517323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.145 [2024-07-15 20:21:15.517338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.145 [2024-07-15 20:21:15.517389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.145 [2024-07-15 20:21:15.517405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.410 #38 NEW cov: 12269 ft: 15436 corp: 29/1669b lim: 85 exec/s: 38 rss: 72Mb L: 78/85 MS: 1 ShuffleBytes- 00:07:23.410 [2024-07-15 20:21:15.556800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.410 [2024-07-15 20:21:15.556826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.410 #39 NEW cov: 12269 ft: 15515 corp: 30/1699b lim: 85 exec/s: 39 rss: 72Mb L: 30/85 MS: 1 ShuffleBytes- 00:07:23.410 [2024-07-15 20:21:15.607387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.410 [2024-07-15 20:21:15.607416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.410 [2024-07-15 20:21:15.607476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.410 [2024-07-15 20:21:15.607492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:23.410 [2024-07-15 20:21:15.607545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.410 [2024-07-15 20:21:15.607570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.410 [2024-07-15 20:21:15.607622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.410 [2024-07-15 20:21:15.607638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.410 #40 NEW cov: 12269 ft: 15526 corp: 31/1768b lim: 85 exec/s: 40 rss: 72Mb L: 69/85 MS: 1 CrossOver- 00:07:23.410 [2024-07-15 20:21:15.657123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.410 [2024-07-15 20:21:15.657150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.410 #43 NEW cov: 12269 ft: 15562 corp: 32/1796b lim: 85 exec/s: 43 rss: 72Mb L: 28/85 MS: 3 CrossOver-ShuffleBytes-CopyPart- 00:07:23.410 [2024-07-15 20:21:15.697515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.410 [2024-07-15 20:21:15.697541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.410 [2024-07-15 20:21:15.697593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.410 [2024-07-15 20:21:15.697611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.410 [2024-07-15 20:21:15.697667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.410 [2024-07-15 20:21:15.697683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.410 #44 NEW cov: 12269 ft: 15574 corp: 33/1857b lim: 85 exec/s: 44 rss: 72Mb L: 61/85 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:23.410 [2024-07-15 20:21:15.747821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.410 [2024-07-15 20:21:15.747847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.410 [2024-07-15 20:21:15.747914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.410 [2024-07-15 20:21:15.747930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.410 [2024-07-15 20:21:15.747983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.410 [2024-07-15 20:21:15.747998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.410 [2024-07-15 20:21:15.748052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.410 [2024-07-15 20:21:15.748068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.410 #45 NEW cov: 12269 ft: 15587 corp: 34/1941b lim: 85 exec/s: 45 rss: 72Mb L: 84/85 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:23.410 [2024-07-15 20:21:15.787654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.410 [2024-07-15 20:21:15.787685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.410 [2024-07-15 20:21:15.787741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.410 [2024-07-15 20:21:15.787758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.670 #46 NEW cov: 12269 ft: 15601 corp: 35/1988b lim: 85 exec/s: 46 rss: 72Mb L: 47/85 MS: 1 ChangeBinInt- 00:07:23.670 [2024-07-15 20:21:15.837896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.670 [2024-07-15 20:21:15.837924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.837977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.670 [2024-07-15 20:21:15.837993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.838047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.670 [2024-07-15 20:21:15.838062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.670 #52 NEW cov: 12269 ft: 15606 corp: 36/2042b lim: 85 exec/s: 52 rss: 73Mb L: 54/85 MS: 1 ChangeBinInt- 00:07:23.670 [2024-07-15 20:21:15.888181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.670 [2024-07-15 20:21:15.888208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.888254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.670 [2024-07-15 20:21:15.888269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.888322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.670 [2024-07-15 20:21:15.888338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.888390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.670 [2024-07-15 20:21:15.888406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.670 #53 NEW cov: 12269 ft: 15620 corp: 37/2122b lim: 85 exec/s: 53 rss: 73Mb L: 80/85 MS: 1 InsertByte- 00:07:23.670 [2024-07-15 20:21:15.938323] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.670 [2024-07-15 20:21:15.938350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.938398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.670 [2024-07-15 20:21:15.938413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.938485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.670 [2024-07-15 20:21:15.938505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.938557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.670 [2024-07-15 20:21:15.938572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.670 #54 NEW cov: 12269 ft: 15634 corp: 38/2192b lim: 85 exec/s: 54 rss: 73Mb L: 70/85 MS: 1 ChangeBit- 00:07:23.670 [2024-07-15 20:21:15.988491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.670 [2024-07-15 20:21:15.988517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.988565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.670 [2024-07-15 20:21:15.988580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.988633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.670 [2024-07-15 20:21:15.988665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.670 [2024-07-15 20:21:15.988719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.670 [2024-07-15 20:21:15.988734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.670 #55 NEW cov: 12269 ft: 15643 corp: 39/2266b lim: 85 exec/s: 27 rss: 73Mb L: 74/85 MS: 1 EraseBytes- 00:07:23.670 #55 DONE cov: 12269 ft: 15643 corp: 39/2266b lim: 85 exec/s: 27 rss: 73Mb 00:07:23.670 ###### Recommended dictionary. ###### 00:07:23.670 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:23.671 "\377\377\001\000" # Uses: 1 00:07:23.671 "\000\000\000\000" # Uses: 0 00:07:23.671 ###### End of recommended dictionary. 
###### 00:07:23.671 Done 55 runs in 2 second(s) 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:23.931 20:21:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:23.931 [2024-07-15 20:21:16.179991] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:23.931 [2024-07-15 20:21:16.180085] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329072 ] 00:07:23.931 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.191 [2024-07-15 20:21:16.363773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.191 [2024-07-15 20:21:16.430389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.191 [2024-07-15 20:21:16.490025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.191 [2024-07-15 20:21:16.506301] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:24.191 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.191 INFO: Seed: 3849380700 00:07:24.191 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:24.191 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:24.191 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:24.191 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.191 #2 INITED exec/s: 0 rss: 63Mb 00:07:24.191 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.191 This may also happen if the target rejected all inputs we tried so far 00:07:24.191 [2024-07-15 20:21:16.551405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.191 [2024-07-15 20:21:16.551436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.711 NEW_FUNC[1/698]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:24.711 NEW_FUNC[2/698]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:24.711 #9 NEW cov: 11975 ft: 11974 corp: 2/8b lim: 25 exec/s: 0 rss: 70Mb L: 7/7 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:07:24.711 [2024-07-15 20:21:16.882290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.711 [2024-07-15 20:21:16.882322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.711 #20 NEW cov: 12088 ft: 12423 corp: 3/14b lim: 25 exec/s: 0 rss: 70Mb L: 6/7 MS: 1 InsertRepeatedBytes- 00:07:24.711 [2024-07-15 20:21:16.922573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.711 [2024-07-15 20:21:16.922601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.711 [2024-07-15 20:21:16.922646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:24.711 [2024-07-15 20:21:16.922662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.711 [2024-07-15 20:21:16.922722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:24.711 
[2024-07-15 20:21:16.922739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.711 #23 NEW cov: 12094 ft: 13180 corp: 4/29b lim: 25 exec/s: 0 rss: 70Mb L: 15/15 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:24.711 [2024-07-15 20:21:16.962436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.711 [2024-07-15 20:21:16.962468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.711 #24 NEW cov: 12179 ft: 13499 corp: 5/36b lim: 25 exec/s: 0 rss: 70Mb L: 7/15 MS: 1 CopyPart- 00:07:24.711 [2024-07-15 20:21:17.012573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.711 [2024-07-15 20:21:17.012598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.711 #25 NEW cov: 12179 ft: 13731 corp: 6/43b lim: 25 exec/s: 0 rss: 70Mb L: 7/15 MS: 1 ChangeByte- 00:07:24.711 [2024-07-15 20:21:17.062670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.711 [2024-07-15 20:21:17.062696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.711 #26 NEW cov: 12179 ft: 13812 corp: 7/51b lim: 25 exec/s: 0 rss: 70Mb L: 8/15 MS: 1 InsertByte- 00:07:24.972 [2024-07-15 20:21:17.102762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.972 [2024-07-15 20:21:17.102789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.972 #27 NEW cov: 12179 ft: 13907 corp: 8/58b lim: 25 exec/s: 0 rss: 70Mb L: 7/15 MS: 1 ChangeBit- 00:07:24.972 [2024-07-15 20:21:17.142881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.972 [2024-07-15 20:21:17.142908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.972 #28 NEW cov: 12179 ft: 13934 corp: 9/63b lim: 25 exec/s: 0 rss: 70Mb L: 5/15 MS: 1 EraseBytes- 00:07:24.972 [2024-07-15 20:21:17.193296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.972 [2024-07-15 20:21:17.193323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.972 [2024-07-15 20:21:17.193371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:24.972 [2024-07-15 20:21:17.193387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.972 [2024-07-15 20:21:17.193456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:24.972 [2024-07-15 20:21:17.193473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.972 #32 NEW cov: 12179 ft: 13966 corp: 10/80b lim: 25 exec/s: 0 rss: 70Mb L: 17/17 MS: 4 CopyPart-ShuffleBytes-CopyPart-InsertRepeatedBytes- 
00:07:24.972 [2024-07-15 20:21:17.233206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.972 [2024-07-15 20:21:17.233234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.972 #33 NEW cov: 12179 ft: 14013 corp: 11/85b lim: 25 exec/s: 0 rss: 70Mb L: 5/17 MS: 1 EraseBytes- 00:07:24.972 [2024-07-15 20:21:17.273296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.972 [2024-07-15 20:21:17.273323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.972 #34 NEW cov: 12179 ft: 14015 corp: 12/90b lim: 25 exec/s: 0 rss: 70Mb L: 5/17 MS: 1 EraseBytes- 00:07:24.972 [2024-07-15 20:21:17.323475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:24.972 [2024-07-15 20:21:17.323502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.972 #36 NEW cov: 12179 ft: 14027 corp: 13/99b lim: 25 exec/s: 0 rss: 70Mb L: 9/17 MS: 2 CopyPart-CMP- DE: "\252\310|M\332?+\000"- 00:07:25.232 [2024-07-15 20:21:17.363545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.232 [2024-07-15 20:21:17.363574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.232 #37 NEW cov: 12179 ft: 14123 corp: 14/106b lim: 25 exec/s: 0 rss: 70Mb L: 7/17 MS: 1 ShuffleBytes- 00:07:25.232 [2024-07-15 20:21:17.403666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.232 [2024-07-15 20:21:17.403693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.232 #38 NEW cov: 12179 ft: 14202 corp: 15/114b lim: 25 exec/s: 0 rss: 70Mb L: 8/17 MS: 1 InsertByte- 00:07:25.232 [2024-07-15 20:21:17.444189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.232 [2024-07-15 20:21:17.444215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.232 [2024-07-15 20:21:17.444266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.232 [2024-07-15 20:21:17.444283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.232 [2024-07-15 20:21:17.444339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.232 [2024-07-15 20:21:17.444355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.232 [2024-07-15 20:21:17.444412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.232 [2024-07-15 20:21:17.444428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.232 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:25.232 #39 NEW cov: 12202 ft: 14716 corp: 16/135b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:07:25.232 [2024-07-15 20:21:17.494074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.232 [2024-07-15 20:21:17.494101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.232 [2024-07-15 20:21:17.494169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.232 [2024-07-15 20:21:17.494185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.232 #40 NEW cov: 12202 ft: 14923 corp: 17/147b lim: 25 exec/s: 0 rss: 70Mb L: 12/21 MS: 1 InsertRepeatedBytes- 00:07:25.232 [2024-07-15 20:21:17.544134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.232 [2024-07-15 20:21:17.544161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.232 #41 NEW cov: 12202 ft: 14988 corp: 18/156b lim: 25 exec/s: 41 rss: 70Mb L: 9/21 MS: 1 CrossOver- 00:07:25.232 [2024-07-15 20:21:17.594266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.232 [2024-07-15 20:21:17.594295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.492 #42 NEW cov: 12202 ft: 15001 corp: 19/162b lim: 25 exec/s: 42 rss: 70Mb L: 6/21 MS: 1 EraseBytes- 00:07:25.492 [2024-07-15 20:21:17.644338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.492 [2024-07-15 20:21:17.644366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.492 #43 NEW cov: 12202 ft: 15077 corp: 20/168b lim: 25 exec/s: 43 rss: 70Mb L: 6/21 MS: 1 ChangeByte- 00:07:25.492 [2024-07-15 20:21:17.694801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.492 [2024-07-15 20:21:17.694828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.492 [2024-07-15 20:21:17.694893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.492 [2024-07-15 20:21:17.694909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.492 [2024-07-15 20:21:17.694974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.492 [2024-07-15 20:21:17.694991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.492 #44 NEW cov: 12202 ft: 15101 corp: 21/183b lim: 25 exec/s: 44 rss: 70Mb L: 15/21 MS: 1 ChangeBinInt- 00:07:25.492 [2024-07-15 20:21:17.744644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.492 [2024-07-15 20:21:17.744671] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.492 #45 NEW cov: 12202 ft: 15127 corp: 22/192b lim: 25 exec/s: 45 rss: 71Mb L: 9/21 MS: 1 CopyPart- 00:07:25.492 [2024-07-15 20:21:17.795051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.492 [2024-07-15 20:21:17.795079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.492 [2024-07-15 20:21:17.795133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.492 [2024-07-15 20:21:17.795150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.492 [2024-07-15 20:21:17.795213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.492 [2024-07-15 20:21:17.795229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.492 #46 NEW cov: 12202 ft: 15138 corp: 23/209b lim: 25 exec/s: 46 rss: 71Mb L: 17/21 MS: 1 CopyPart- 00:07:25.492 [2024-07-15 20:21:17.834903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.492 [2024-07-15 20:21:17.834930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.492 #47 NEW cov: 12202 ft: 15144 corp: 24/215b lim: 25 exec/s: 47 rss: 71Mb L: 6/21 MS: 1 ChangeBit- 00:07:25.752 [2024-07-15 20:21:17.885037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.752 [2024-07-15 20:21:17.885065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.752 #49 NEW cov: 12202 ft: 15203 corp: 25/224b lim: 25 exec/s: 49 rss: 71Mb L: 9/21 MS: 2 CrossOver-PersAutoDict- DE: "\252\310|M\332?+\000"- 00:07:25.752 [2024-07-15 20:21:17.925179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.752 [2024-07-15 20:21:17.925205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.752 #50 NEW cov: 12202 ft: 15242 corp: 26/233b lim: 25 exec/s: 50 rss: 71Mb L: 9/21 MS: 1 ChangeBinInt- 00:07:25.752 [2024-07-15 20:21:17.975290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.752 [2024-07-15 20:21:17.975317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.752 #51 NEW cov: 12202 ft: 15248 corp: 27/242b lim: 25 exec/s: 51 rss: 71Mb L: 9/21 MS: 1 ChangeBit- 00:07:25.752 [2024-07-15 20:21:18.015764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.752 [2024-07-15 20:21:18.015791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.752 [2024-07-15 20:21:18.015865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.752 [2024-07-15 20:21:18.015882] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.752 [2024-07-15 20:21:18.015940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.752 [2024-07-15 20:21:18.015959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.752 [2024-07-15 20:21:18.016018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.752 [2024-07-15 20:21:18.016035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.752 #52 NEW cov: 12202 ft: 15258 corp: 28/264b lim: 25 exec/s: 52 rss: 71Mb L: 22/22 MS: 1 InsertByte- 00:07:25.752 [2024-07-15 20:21:18.065596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.752 [2024-07-15 20:21:18.065622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.752 #53 NEW cov: 12202 ft: 15302 corp: 29/270b lim: 25 exec/s: 53 rss: 71Mb L: 6/22 MS: 1 ChangeBit- 00:07:25.752 [2024-07-15 20:21:18.105690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.752 [2024-07-15 20:21:18.105716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.013 #54 NEW cov: 12202 ft: 15313 corp: 30/279b lim: 25 exec/s: 54 rss: 71Mb L: 9/22 MS: 1 ShuffleBytes- 00:07:26.013 [2024-07-15 20:21:18.156212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.013 [2024-07-15 20:21:18.156238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.156300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.013 [2024-07-15 20:21:18.156316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.156375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.013 [2024-07-15 20:21:18.156390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.156454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.013 [2024-07-15 20:21:18.156470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.013 #55 NEW cov: 12202 ft: 15331 corp: 31/300b lim: 25 exec/s: 55 rss: 71Mb L: 21/22 MS: 1 CopyPart- 00:07:26.013 [2024-07-15 20:21:18.206485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.013 [2024-07-15 20:21:18.206513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.206572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.013 [2024-07-15 20:21:18.206587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.206647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.013 [2024-07-15 20:21:18.206663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.206721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.013 [2024-07-15 20:21:18.206736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.206796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:26.013 [2024-07-15 20:21:18.206812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.013 #56 NEW cov: 12202 ft: 15399 corp: 32/325b lim: 25 exec/s: 56 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:26.013 [2024-07-15 20:21:18.246526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.013 [2024-07-15 20:21:18.246552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.246630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.013 [2024-07-15 20:21:18.246644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.246706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.013 [2024-07-15 20:21:18.246722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.246782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.013 [2024-07-15 20:21:18.246796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.246857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:26.013 [2024-07-15 20:21:18.246874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.013 #57 NEW cov: 12202 ft: 15436 corp: 33/350b lim: 25 exec/s: 57 rss: 72Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:26.013 [2024-07-15 20:21:18.296301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.013 [2024-07-15 20:21:18.296327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.013 [2024-07-15 20:21:18.296370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.013 [2024-07-15 20:21:18.296385] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.014 #58 NEW cov: 12202 ft: 15439 corp: 34/362b lim: 25 exec/s: 58 rss: 72Mb L: 12/25 MS: 1 ChangeBinInt- 00:07:26.014 [2024-07-15 20:21:18.346898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.014 [2024-07-15 20:21:18.346924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.014 [2024-07-15 20:21:18.347001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.014 [2024-07-15 20:21:18.347017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.014 [2024-07-15 20:21:18.347077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.014 [2024-07-15 20:21:18.347094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.014 [2024-07-15 20:21:18.347151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.014 [2024-07-15 20:21:18.347167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.014 [2024-07-15 20:21:18.347224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:26.014 [2024-07-15 20:21:18.347240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.014 #59 NEW cov: 12202 ft: 15458 corp: 35/387b lim: 25 exec/s: 59 rss: 72Mb L: 25/25 MS: 1 CopyPart- 00:07:26.274 [2024-07-15 20:21:18.396704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.274 [2024-07-15 20:21:18.396734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.274 [2024-07-15 20:21:18.396806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.274 [2024-07-15 20:21:18.396821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.274 #60 NEW cov: 12202 ft: 15467 corp: 36/401b lim: 25 exec/s: 60 rss: 72Mb L: 14/25 MS: 1 PersAutoDict- DE: "\252\310|M\332?+\000"- 00:07:26.274 [2024-07-15 20:21:18.446648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.274 [2024-07-15 20:21:18.446675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.274 #61 NEW cov: 12202 ft: 15535 corp: 37/408b lim: 25 exec/s: 61 rss: 72Mb L: 7/25 MS: 1 ShuffleBytes- 00:07:26.274 [2024-07-15 20:21:18.487140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.274 [2024-07-15 20:21:18.487167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.274 [2024-07-15 20:21:18.487234] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.274 [2024-07-15 20:21:18.487250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.274 [2024-07-15 20:21:18.487310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.274 [2024-07-15 20:21:18.487326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.274 [2024-07-15 20:21:18.487384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.274 [2024-07-15 20:21:18.487400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.274 #62 NEW cov: 12202 ft: 15548 corp: 38/429b lim: 25 exec/s: 62 rss: 72Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:07:26.274 [2024-07-15 20:21:18.526886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.274 [2024-07-15 20:21:18.526911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.274 #63 NEW cov: 12202 ft: 15553 corp: 39/436b lim: 25 exec/s: 31 rss: 72Mb L: 7/25 MS: 1 ShuffleBytes- 00:07:26.274 #63 DONE cov: 12202 ft: 15553 corp: 39/436b lim: 25 exec/s: 31 rss: 72Mb 00:07:26.274 ###### Recommended dictionary. ###### 00:07:26.274 "\252\310|M\332?+\000" # Uses: 2 00:07:26.274 ###### End of recommended dictionary. ###### 00:07:26.274 Done 63 runs in 2 second(s) 00:07:26.533 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.533 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.534 20:21:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:26.534 [2024-07-15 20:21:18.712811] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:26.534 [2024-07-15 20:21:18.712881] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329439 ] 00:07:26.534 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.534 [2024-07-15 20:21:18.887576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.794 [2024-07-15 20:21:18.955384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.794 [2024-07-15 20:21:19.014913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.794 [2024-07-15 20:21:19.031203] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:26.794 INFO: Running with entropic power schedule (0xFF, 100). 00:07:26.794 INFO: Seed: 2079427608 00:07:26.794 INFO: Loaded 1 modules (357886 inline 8-bit counters): 357886 [0x29ac48c, 0x2a03a8a), 00:07:26.794 INFO: Loaded 1 PC tables (357886 PCs): 357886 [0x2a03a90,0x2f79a70), 00:07:26.794 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:26.794 INFO: A corpus is not provided, starting from an empty corpus 00:07:26.794 #2 INITED exec/s: 0 rss: 64Mb 00:07:26.794 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:26.794 This may also happen if the target rejected all inputs we tried so far 00:07:26.794 [2024-07-15 20:21:19.107134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677110969590627 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.794 [2024-07-15 20:21:19.107177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.054 NEW_FUNC[1/699]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:27.054 NEW_FUNC[2/699]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.054 #11 NEW cov: 12047 ft: 12046 corp: 2/39b lim: 100 exec/s: 0 rss: 70Mb L: 38/38 MS: 4 CopyPart-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:27.314 [2024-07-15 20:21:19.447951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161683708039357283 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.314 [2024-07-15 20:21:19.448009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.314 #12 NEW cov: 12160 ft: 12801 corp: 3/77b lim: 100 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 ChangeBinInt- 00:07:27.314 [2024-07-15 20:21:19.497996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.314 [2024-07-15 20:21:19.498022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.314 #18 NEW cov: 12166 ft: 13154 corp: 4/116b lim: 100 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 CrossOver- 00:07:27.314 [2024-07-15 20:21:19.538038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161683708039357283 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.314 [2024-07-15 20:21:19.538069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.314 #19 NEW cov: 12251 ft: 13363 corp: 5/154b lim: 100 exec/s: 0 rss: 70Mb L: 38/39 MS: 1 ChangeBinInt- 00:07:27.314 [2024-07-15 20:21:19.588709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1392508928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.314 [2024-07-15 20:21:19.588741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.314 [2024-07-15 20:21:19.588814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.314 [2024-07-15 20:21:19.588835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.314 [2024-07-15 20:21:19.588959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.315 [2024-07-15 20:21:19.588983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.315 #21 NEW cov: 12251 ft: 14350 corp: 6/223b lim: 100 exec/s: 0 rss: 70Mb L: 69/69 
MS: 2 InsertByte-InsertRepeatedBytes- 00:07:27.315 [2024-07-15 20:21:19.628364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.315 [2024-07-15 20:21:19.628390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.315 #22 NEW cov: 12251 ft: 14413 corp: 7/257b lim: 100 exec/s: 0 rss: 71Mb L: 34/69 MS: 1 CrossOver- 00:07:27.315 [2024-07-15 20:21:19.669272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1392508928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.315 [2024-07-15 20:21:19.669305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.315 [2024-07-15 20:21:19.669394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.315 [2024-07-15 20:21:19.669414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.315 [2024-07-15 20:21:19.669541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:953482739712 len:57055 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.315 [2024-07-15 20:21:19.669567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.315 [2024-07-15 20:21:19.669685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.315 [2024-07-15 20:21:19.669709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.574 #23 NEW cov: 12251 ft: 14783 corp: 8/337b lim: 100 exec/s: 0 rss: 71Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:07:27.574 [2024-07-15 20:21:19.718495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677110969590627 len:22884 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.574 [2024-07-15 20:21:19.718521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.574 #24 NEW cov: 12251 ft: 14806 corp: 9/375b lim: 100 exec/s: 0 rss: 71Mb L: 38/80 MS: 1 ChangeBinInt- 00:07:27.575 [2024-07-15 20:21:19.759157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1392508928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.759191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.575 [2024-07-15 20:21:19.759299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.759321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.575 [2024-07-15 20:21:19.759434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.759460] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.575 #25 NEW cov: 12251 ft: 14870 corp: 10/444b lim: 100 exec/s: 0 rss: 71Mb L: 69/80 MS: 1 ChangeBit- 00:07:27.575 [2024-07-15 20:21:19.798864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.798893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.575 #26 NEW cov: 12251 ft: 14896 corp: 11/478b lim: 100 exec/s: 0 rss: 71Mb L: 34/80 MS: 1 ChangeByte- 00:07:27.575 [2024-07-15 20:21:19.849003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.849032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.575 #27 NEW cov: 12251 ft: 14951 corp: 12/517b lim: 100 exec/s: 0 rss: 71Mb L: 39/80 MS: 1 ChangeBinInt- 00:07:27.575 [2024-07-15 20:21:19.899399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161683708039357283 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.899431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.575 [2024-07-15 20:21:19.899556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.899578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.575 [2024-07-15 20:21:19.899699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.899719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.575 [2024-07-15 20:21:19.899839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:7161677109308646243 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.899864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.575 #28 NEW cov: 12251 ft: 14993 corp: 13/602b lim: 100 exec/s: 0 rss: 71Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:07:27.575 [2024-07-15 20:21:19.938878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:16740 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.575 [2024-07-15 20:21:19.938905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.834 NEW_FUNC[1/1]: 0x1a7f5f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:27.834 #34 NEW cov: 12274 ft: 15050 corp: 14/637b lim: 100 exec/s: 0 rss: 71Mb L: 35/85 MS: 1 InsertByte- 00:07:27.834 [2024-07-15 20:21:19.990162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161683708039357283 len:25444 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:27.834 [2024-07-15 20:21:19.990197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.834 [2024-07-15 20:21:19.990272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:19.990292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.834 [2024-07-15 20:21:19.990414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4251398048237748224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:19.990437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.834 [2024-07-15 20:21:19.990571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:7161677109302158179 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:19.990594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.834 #35 NEW cov: 12274 ft: 15066 corp: 15/723b lim: 100 exec/s: 0 rss: 71Mb L: 86/86 MS: 1 InsertByte- 00:07:27.834 [2024-07-15 20:21:20.049593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161683708039357283 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.049621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.834 #36 NEW cov: 12274 ft: 15123 corp: 16/761b lim: 100 exec/s: 36 rss: 72Mb L: 38/86 MS: 1 ChangeByte- 00:07:27.834 [2024-07-15 20:21:20.109795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.109829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.834 #37 NEW cov: 12274 ft: 15189 corp: 17/784b lim: 100 exec/s: 37 rss: 72Mb L: 23/86 MS: 1 CrossOver- 00:07:27.834 [2024-07-15 20:21:20.150248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161683708039357283 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.150279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.834 [2024-07-15 20:21:20.150377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.150398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.834 [2024-07-15 20:21:20.150522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.150548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.834 [2024-07-15 20:21:20.150684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:3 nsid:0 lba:7161677109308646243 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.150707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.834 #38 NEW cov: 12274 ft: 15228 corp: 18/869b lim: 100 exec/s: 38 rss: 72Mb L: 85/86 MS: 1 CopyPart- 00:07:27.834 [2024-07-15 20:21:20.190718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1392508928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.190754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.834 [2024-07-15 20:21:20.190858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.190880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.834 [2024-07-15 20:21:20.190994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.191017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.834 [2024-07-15 20:21:20.191141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.834 [2024-07-15 20:21:20.191161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.093 #39 NEW cov: 12274 ft: 15258 corp: 19/963b lim: 100 exec/s: 39 rss: 72Mb L: 94/94 MS: 1 InsertRepeatedBytes- 00:07:28.093 [2024-07-15 20:21:20.240435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1392508928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.240469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.093 [2024-07-15 20:21:20.240578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.240608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.093 #40 NEW cov: 12274 ft: 15560 corp: 20/1012b lim: 100 exec/s: 40 rss: 72Mb L: 49/94 MS: 1 EraseBytes- 00:07:28.093 [2024-07-15 20:21:20.290514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:13668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.290541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.093 [2024-07-15 20:21:20.290669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7161677110969590627 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.290691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.093 #41 NEW cov: 12274 ft: 15574 
corp: 21/1052b lim: 100 exec/s: 41 rss: 72Mb L: 40/94 MS: 1 InsertByte- 00:07:28.093 [2024-07-15 20:21:20.330263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161683708039357283 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.330294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.093 #42 NEW cov: 12274 ft: 15588 corp: 22/1091b lim: 100 exec/s: 42 rss: 72Mb L: 39/94 MS: 1 InsertByte- 00:07:28.093 [2024-07-15 20:21:20.370938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677110969590627 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.370973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.093 [2024-07-15 20:21:20.371084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7161677110969590627 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.371105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.093 [2024-07-15 20:21:20.371228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.371248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.093 #43 NEW cov: 12274 ft: 15593 corp: 23/1168b lim: 100 exec/s: 43 rss: 72Mb L: 77/94 MS: 1 InsertRepeatedBytes- 00:07:28.093 [2024-07-15 20:21:20.411106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1392508928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.411140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.093 [2024-07-15 20:21:20.411230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.411254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.093 [2024-07-15 20:21:20.411376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:48 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.411398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.093 #44 NEW cov: 12274 ft: 15603 corp: 24/1237b lim: 100 exec/s: 44 rss: 72Mb L: 69/94 MS: 1 ChangeByte- 00:07:28.093 [2024-07-15 20:21:20.461062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1392508928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.093 [2024-07-15 20:21:20.461095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.094 [2024-07-15 20:21:20.461206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.094 [2024-07-15 20:21:20.461229] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.353 #50 NEW cov: 12274 ft: 15614 corp: 25/1287b lim: 100 exec/s: 50 rss: 72Mb L: 50/94 MS: 1 EraseBytes- 00:07:28.353 [2024-07-15 20:21:20.521465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.521501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.353 [2024-07-15 20:21:20.521633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7161677110969590627 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.521658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.353 [2024-07-15 20:21:20.521773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7161677110969590627 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.521798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.353 #51 NEW cov: 12274 ft: 15632 corp: 26/1348b lim: 100 exec/s: 51 rss: 72Mb L: 61/94 MS: 1 CopyPart- 00:07:28.353 [2024-07-15 20:21:20.561078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:16740 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.561104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.353 #52 NEW cov: 12274 ft: 15678 corp: 27/1381b lim: 100 exec/s: 52 rss: 72Mb L: 33/94 MS: 1 EraseBytes- 00:07:28.353 [2024-07-15 20:21:20.611282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677110969590627 len:26980 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.611311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.353 #53 NEW cov: 12274 ft: 15690 corp: 28/1419b lim: 100 exec/s: 53 rss: 72Mb L: 38/94 MS: 1 CopyPart- 00:07:28.353 [2024-07-15 20:21:20.661956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1392508928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.661993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.353 [2024-07-15 20:21:20.662117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4143972351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.662143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.353 [2024-07-15 20:21:20.662264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.662286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.353 
#54 NEW cov: 12274 ft: 15727 corp: 29/1488b lim: 100 exec/s: 54 rss: 72Mb L: 69/94 MS: 1 ChangeBinInt- 00:07:28.353 [2024-07-15 20:21:20.702052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1392508928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.702087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.353 [2024-07-15 20:21:20.702172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.702193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.353 [2024-07-15 20:21:20.702315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7161677110969983843 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.353 [2024-07-15 20:21:20.702337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.612 #55 NEW cov: 12274 ft: 15746 corp: 30/1561b lim: 100 exec/s: 55 rss: 72Mb L: 73/94 MS: 1 CrossOver- 00:07:28.612 [2024-07-15 20:21:20.751232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.612 [2024-07-15 20:21:20.751260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.612 #58 NEW cov: 12274 ft: 15757 corp: 31/1596b lim: 100 exec/s: 58 rss: 72Mb L: 35/94 MS: 3 InsertByte-ChangeBit-CrossOver- 00:07:28.612 [2024-07-15 20:21:20.791730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677110969590627 len:26980 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.612 [2024-07-15 20:21:20.791758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.612 #59 NEW cov: 12274 ft: 15772 corp: 32/1634b lim: 100 exec/s: 59 rss: 73Mb L: 38/94 MS: 1 ShuffleBytes- 00:07:28.612 [2024-07-15 20:21:20.841906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161683708039357283 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.612 [2024-07-15 20:21:20.841935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.612 #60 NEW cov: 12274 ft: 15794 corp: 33/1672b lim: 100 exec/s: 60 rss: 73Mb L: 38/94 MS: 1 CrossOver- 00:07:28.612 [2024-07-15 20:21:20.881919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.612 [2024-07-15 20:21:20.881946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.612 #61 NEW cov: 12274 ft: 15804 corp: 34/1706b lim: 100 exec/s: 61 rss: 73Mb L: 34/94 MS: 1 ChangeByte- 00:07:28.612 [2024-07-15 20:21:20.921927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:16740 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.612 [2024-07-15 20:21:20.921955] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.612 #62 NEW cov: 12274 ft: 15846 corp: 35/1739b lim: 100 exec/s: 62 rss: 73Mb L: 33/94 MS: 1 CopyPart- 00:07:28.612 [2024-07-15 20:21:20.971863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161683708039357283 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.612 [2024-07-15 20:21:20.971889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.612 #63 NEW cov: 12274 ft: 15857 corp: 36/1777b lim: 100 exec/s: 63 rss: 73Mb L: 38/94 MS: 1 ChangeByte- 00:07:28.871 [2024-07-15 20:21:21.011921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.871 [2024-07-15 20:21:21.011947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.871 #64 NEW cov: 12274 ft: 15866 corp: 37/1800b lim: 100 exec/s: 64 rss: 73Mb L: 23/94 MS: 1 ChangeBit- 00:07:28.871 [2024-07-15 20:21:21.062997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7161677109476418403 len:16740 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.871 [2024-07-15 20:21:21.063030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.871 [2024-07-15 20:21:21.063142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7161677110969590627 len:25444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.871 [2024-07-15 20:21:21.063165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.871 [2024-07-15 20:21:21.063291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.871 [2024-07-15 20:21:21.063325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.872 #65 NEW cov: 12274 ft: 15877 corp: 38/1866b lim: 100 exec/s: 32 rss: 73Mb L: 66/94 MS: 1 CrossOver- 00:07:28.872 #65 DONE cov: 12274 ft: 15877 corp: 38/1866b lim: 100 exec/s: 32 rss: 73Mb 00:07:28.872 Done 65 runs in 2 second(s) 00:07:28.872 20:21:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:28.872 20:21:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:28.872 20:21:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:28.872 20:21:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:28.872 00:07:28.872 real 1m4.180s 00:07:28.872 user 1m40.482s 00:07:28.872 sys 0m7.018s 00:07:28.872 20:21:21 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.872 20:21:21 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:28.872 ************************************ 00:07:28.872 END TEST nvmf_llvm_fuzz 00:07:28.872 ************************************ 00:07:29.134 20:21:21 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:07:29.134 20:21:21 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:29.134 20:21:21 llvm_fuzz -- 
fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:29.134 20:21:21 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:29.134 20:21:21 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.134 20:21:21 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.134 20:21:21 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:29.134 ************************************ 00:07:29.134 START TEST vfio_llvm_fuzz 00:07:29.134 ************************************ 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:29.134 * Looking for test storage... 00:07:29.134 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- 
common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 
00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- 
common/build_config.sh@83 -- # CONFIG_URING=n 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:29.134 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:29.135 #define SPDK_CONFIG_H 00:07:29.135 #define SPDK_CONFIG_APPS 1 00:07:29.135 #define SPDK_CONFIG_ARCH native 00:07:29.135 #undef SPDK_CONFIG_ASAN 00:07:29.135 #undef SPDK_CONFIG_AVAHI 00:07:29.135 #undef SPDK_CONFIG_CET 00:07:29.135 #define SPDK_CONFIG_COVERAGE 1 00:07:29.135 #define SPDK_CONFIG_CROSS_PREFIX 00:07:29.135 #undef SPDK_CONFIG_CRYPTO 00:07:29.135 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:29.135 #undef SPDK_CONFIG_CUSTOMOCF 00:07:29.135 #undef SPDK_CONFIG_DAOS 00:07:29.135 #define SPDK_CONFIG_DAOS_DIR 00:07:29.135 #define SPDK_CONFIG_DEBUG 1 00:07:29.135 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:29.135 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:29.135 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:29.135 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:29.135 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:29.135 #undef SPDK_CONFIG_DPDK_UADK 00:07:29.135 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:29.135 #define SPDK_CONFIG_EXAMPLES 1 00:07:29.135 #undef SPDK_CONFIG_FC 00:07:29.135 #define SPDK_CONFIG_FC_PATH 00:07:29.135 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:29.135 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:29.135 #undef SPDK_CONFIG_FUSE 00:07:29.135 
#define SPDK_CONFIG_FUZZER 1 00:07:29.135 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:29.135 #undef SPDK_CONFIG_GOLANG 00:07:29.135 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:29.135 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:29.135 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:29.135 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:29.135 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:29.135 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:29.135 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:29.135 #define SPDK_CONFIG_IDXD 1 00:07:29.135 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:29.135 #undef SPDK_CONFIG_IPSEC_MB 00:07:29.135 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:29.135 #define SPDK_CONFIG_ISAL 1 00:07:29.135 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:29.135 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:29.135 #define SPDK_CONFIG_LIBDIR 00:07:29.135 #undef SPDK_CONFIG_LTO 00:07:29.135 #define SPDK_CONFIG_MAX_LCORES 128 00:07:29.135 #define SPDK_CONFIG_NVME_CUSE 1 00:07:29.135 #undef SPDK_CONFIG_OCF 00:07:29.135 #define SPDK_CONFIG_OCF_PATH 00:07:29.135 #define SPDK_CONFIG_OPENSSL_PATH 00:07:29.135 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:29.135 #define SPDK_CONFIG_PGO_DIR 00:07:29.135 #undef SPDK_CONFIG_PGO_USE 00:07:29.135 #define SPDK_CONFIG_PREFIX /usr/local 00:07:29.135 #undef SPDK_CONFIG_RAID5F 00:07:29.135 #undef SPDK_CONFIG_RBD 00:07:29.135 #define SPDK_CONFIG_RDMA 1 00:07:29.135 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:29.135 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:29.135 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:29.135 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:29.135 #undef SPDK_CONFIG_SHARED 00:07:29.135 #undef SPDK_CONFIG_SMA 00:07:29.135 #define SPDK_CONFIG_TESTS 1 00:07:29.135 #undef SPDK_CONFIG_TSAN 00:07:29.135 #define SPDK_CONFIG_UBLK 1 00:07:29.135 #define SPDK_CONFIG_UBSAN 1 00:07:29.135 #undef SPDK_CONFIG_UNIT_TESTS 00:07:29.135 #undef SPDK_CONFIG_URING 00:07:29.135 #define SPDK_CONFIG_URING_PATH 00:07:29.135 #undef SPDK_CONFIG_URING_ZNS 00:07:29.135 #undef SPDK_CONFIG_USDT 00:07:29.135 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:29.135 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:29.135 #define SPDK_CONFIG_VFIO_USER 1 00:07:29.135 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:29.135 #define SPDK_CONFIG_VHOST 1 00:07:29.135 #define SPDK_CONFIG_VIRTIO 1 00:07:29.135 #undef SPDK_CONFIG_VTUNE 00:07:29.135 #define SPDK_CONFIG_VTUNE_DIR 00:07:29.135 #define SPDK_CONFIG_WERROR 1 00:07:29.135 #define SPDK_CONFIG_WPDK_DIR 00:07:29.135 #undef SPDK_CONFIG_XNVME 00:07:29.135 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:29.135 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:29.136 20:21:21 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:29.136 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 330007 ]] 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 330007 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.n2wlrQ 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.n2wlrQ/tests/vfio /tmp/spdk.n2wlrQ 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=954408960 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4330020864 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=53948342272 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742317568 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7793975296 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866448384 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # 
avails["$mount"]=12342484992 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348465152 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5980160 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870216704 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871158784 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=942080 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:29.137 * Looking for test storage... 
00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=53948342272 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:29.137 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=10008567808 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.138 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:29.138 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:29.409 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:29.409 
20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:29.409 20:21:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:29.409 [2024-07-15 20:21:21.563091] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:29.409 [2024-07-15 20:21:21.563155] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330046 ] 00:07:29.409 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.409 [2024-07-15 20:21:21.634922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.409 [2024-07-15 20:21:21.705579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.669 INFO: Running with entropic power schedule (0xFF, 100). 00:07:29.669 INFO: Seed: 622450244 00:07:29.669 INFO: Loaded 1 modules (355122 inline 8-bit counters): 355122 [0x296dc8c, 0x29c47be), 00:07:29.669 INFO: Loaded 1 PC tables (355122 PCs): 355122 [0x29c47c0,0x2f2fae0), 00:07:29.669 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:29.669 INFO: A corpus is not provided, starting from an empty corpus 00:07:29.669 #2 INITED exec/s: 0 rss: 64Mb 00:07:29.669 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:29.669 This may also happen if the target rejected all inputs we tried so far 00:07:29.669 [2024-07-15 20:21:21.939528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:30.188 NEW_FUNC[1/658]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:30.188 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:30.188 #8 NEW cov: 10954 ft: 10923 corp: 2/7b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:07:30.448 #10 NEW cov: 10972 ft: 13587 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 2 ShuffleBytes-CrossOver- 00:07:30.448 NEW_FUNC[1/1]: 0x1a4bb20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:30.448 #21 NEW cov: 10989 ft: 14665 corp: 4/19b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:30.707 #22 NEW cov: 10989 ft: 16431 corp: 5/25b lim: 6 exec/s: 22 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:30.976 #28 NEW cov: 10989 ft: 16870 corp: 6/31b lim: 6 exec/s: 28 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:07:31.237 #29 NEW cov: 10989 ft: 17467 corp: 7/37b lim: 6 exec/s: 29 rss: 73Mb L: 6/6 MS: 1 ChangeByte- 00:07:31.237 #31 NEW cov: 10989 ft: 17712 corp: 8/43b lim: 6 exec/s: 31 rss: 74Mb L: 6/6 MS: 2 EraseBytes-CrossOver- 00:07:31.496 #32 NEW cov: 10996 ft: 17924 corp: 9/49b lim: 6 exec/s: 32 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:07:31.756 #33 NEW cov: 10996 ft: 17962 corp: 10/55b lim: 6 exec/s: 16 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:07:31.756 #33 DONE cov: 10996 ft: 17962 corp: 10/55b lim: 6 exec/s: 16 rss: 74Mb 00:07:31.756 Done 33 runs in 2 second(s) 00:07:31.756 [2024-07-15 20:21:24.016621] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:32.014 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:32.014 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p 
/tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:32.015 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:32.015 20:21:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:32.015 [2024-07-15 20:21:24.293705] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:32.015 [2024-07-15 20:21:24.293770] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330582 ] 00:07:32.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.015 [2024-07-15 20:21:24.364142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.274 [2024-07-15 20:21:24.434782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.274 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.274 INFO: Seed: 3350452759 00:07:32.274 INFO: Loaded 1 modules (355122 inline 8-bit counters): 355122 [0x296dc8c, 0x29c47be), 00:07:32.274 INFO: Loaded 1 PC tables (355122 PCs): 355122 [0x29c47c0,0x2f2fae0), 00:07:32.274 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:32.274 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.274 #2 INITED exec/s: 0 rss: 64Mb 00:07:32.274 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:32.274 This may also happen if the target rejected all inputs we tried so far 00:07:32.549 [2024-07-15 20:21:24.665075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:32.549 [2024-07-15 20:21:24.718472] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:32.549 [2024-07-15 20:21:24.718495] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:32.549 [2024-07-15 20:21:24.718513] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:32.832 NEW_FUNC[1/660]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:32.832 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:32.832 #31 NEW cov: 10951 ft: 10907 corp: 2/5b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 4 ShuffleBytes-CrossOver-CrossOver-CrossOver- 00:07:33.091 [2024-07-15 20:21:25.214733] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.091 [2024-07-15 20:21:25.214766] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.091 [2024-07-15 20:21:25.214785] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:33.091 #37 NEW cov: 10973 ft: 13713 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:07:33.091 [2024-07-15 20:21:25.410605] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.091 [2024-07-15 20:21:25.410628] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.091 [2024-07-15 20:21:25.410646] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:33.350 NEW_FUNC[1/1]: 0x1a4bb20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:33.351 #47 NEW cov: 10990 ft: 14916 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 5 ChangeByte-CopyPart-CrossOver-ChangeBit-CopyPart- 00:07:33.351 [2024-07-15 20:21:25.617253] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.351 [2024-07-15 20:21:25.617275] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.351 [2024-07-15 20:21:25.617292] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:33.610 #48 NEW cov: 10990 ft: 16174 corp: 5/17b lim: 4 exec/s: 48 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:33.610 [2024-07-15 20:21:25.812674] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.610 [2024-07-15 20:21:25.812701] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.610 [2024-07-15 20:21:25.812718] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:33.610 #49 NEW cov: 10990 ft: 16695 corp: 6/21b lim: 4 exec/s: 49 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:07:33.869 [2024-07-15 20:21:26.010151] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.869 [2024-07-15 20:21:26.010174] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.869 [2024-07-15 20:21:26.010191] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 
00:07:33.869 #50 NEW cov: 10990 ft: 16971 corp: 7/25b lim: 4 exec/s: 50 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:33.869 [2024-07-15 20:21:26.204524] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.869 [2024-07-15 20:21:26.204546] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.869 [2024-07-15 20:21:26.204563] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.128 #51 NEW cov: 10990 ft: 17230 corp: 8/29b lim: 4 exec/s: 51 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:34.128 [2024-07-15 20:21:26.401976] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.128 [2024-07-15 20:21:26.401998] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.128 [2024-07-15 20:21:26.402016] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.386 #52 NEW cov: 10997 ft: 17348 corp: 9/33b lim: 4 exec/s: 52 rss: 74Mb L: 4/4 MS: 1 CrossOver- 00:07:34.386 [2024-07-15 20:21:26.619413] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.386 [2024-07-15 20:21:26.619435] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.386 [2024-07-15 20:21:26.619461] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.386 #54 NEW cov: 10997 ft: 17548 corp: 10/37b lim: 4 exec/s: 27 rss: 74Mb L: 4/4 MS: 2 EraseBytes-CrossOver- 00:07:34.386 #54 DONE cov: 10997 ft: 17548 corp: 10/37b lim: 4 exec/s: 27 rss: 74Mb 00:07:34.386 Done 54 runs in 2 second(s) 00:07:34.386 [2024-07-15 20:21:26.763632] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:34.645 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:34.645 20:21:27 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:34.904 [2024-07-15 20:21:27.047461] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:34.904 [2024-07-15 20:21:27.047538] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331117 ] 00:07:34.904 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.904 [2024-07-15 20:21:27.117592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.904 [2024-07-15 20:21:27.189045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.164 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.164 INFO: Seed: 1810489268 00:07:35.164 INFO: Loaded 1 modules (355122 inline 8-bit counters): 355122 [0x296dc8c, 0x29c47be), 00:07:35.164 INFO: Loaded 1 PC tables (355122 PCs): 355122 [0x29c47c0,0x2f2fae0), 00:07:35.164 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:35.164 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.164 #2 INITED exec/s: 0 rss: 64Mb 00:07:35.164 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:35.164 This may also happen if the target rejected all inputs we tried so far 00:07:35.164 [2024-07-15 20:21:27.422660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:35.164 [2024-07-15 20:21:27.463932] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:35.680 NEW_FUNC[1/658]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:35.680 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:35.680 #61 NEW cov: 10931 ft: 10875 corp: 2/9b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 4 InsertByte-CMP-InsertByte-InsertByte- DE: "\000\000\000\001"- 00:07:35.680 [2024-07-15 20:21:27.959543] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:35.939 NEW_FUNC[1/1]: 0x17d6f70 in nvme_transport_qpair_submit_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:611 00:07:35.939 #62 NEW cov: 10949 ft: 13948 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 CrossOver- 00:07:35.939 [2024-07-15 20:21:28.177861] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:35.939 NEW_FUNC[1/1]: 0x1a4bb20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:35.939 #67 NEW cov: 10966 ft: 15395 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 5 ShuffleBytes-PersAutoDict-EraseBytes-InsertRepeatedBytes-CopyPart- DE: "\000\000\000\001"- 00:07:36.197 [2024-07-15 20:21:28.397806] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.197 #78 NEW cov: 10966 ft: 16124 corp: 5/33b lim: 8 exec/s: 78 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:36.456 [2024-07-15 20:21:28.606580] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.456 #79 NEW cov: 10966 ft: 16480 corp: 6/41b lim: 8 exec/s: 79 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:36.456 [2024-07-15 20:21:28.817539] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.714 #80 NEW cov: 10966 ft: 16582 corp: 7/49b lim: 8 exec/s: 80 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:07:36.714 [2024-07-15 20:21:29.026898] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.973 #86 NEW cov: 10966 ft: 16659 corp: 8/57b lim: 8 exec/s: 86 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:36.973 [2024-07-15 20:21:29.230861] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.232 #92 NEW cov: 10973 ft: 16892 corp: 9/65b lim: 8 exec/s: 92 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:37.233 [2024-07-15 20:21:29.438706] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.233 #97 NEW cov: 10973 ft: 17353 corp: 10/73b lim: 8 exec/s: 48 rss: 73Mb L: 8/8 MS: 5 CrossOver-CopyPart-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:07:37.233 #97 DONE cov: 10973 ft: 17353 corp: 10/73b lim: 8 exec/s: 48 rss: 73Mb 00:07:37.233 ###### Recommended dictionary. ###### 00:07:37.233 "\000\000\000\001" # Uses: 2 00:07:37.233 ###### End of recommended dictionary. 
###### 00:07:37.233 Done 97 runs in 2 second(s) 00:07:37.233 [2024-07-15 20:21:29.581626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:37.491 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:37.491 20:21:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:37.491 [2024-07-15 20:21:29.849317] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:37.491 [2024-07-15 20:21:29.849384] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331517 ] 00:07:37.750 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.750 [2024-07-15 20:21:29.918898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.750 [2024-07-15 20:21:29.990090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.009 INFO: Running with entropic power schedule (0xFF, 100). 00:07:38.009 INFO: Seed: 322533406 00:07:38.009 INFO: Loaded 1 modules (355122 inline 8-bit counters): 355122 [0x296dc8c, 0x29c47be), 00:07:38.009 INFO: Loaded 1 PC tables (355122 PCs): 355122 [0x29c47c0,0x2f2fae0), 00:07:38.009 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:38.009 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.009 #2 INITED exec/s: 0 rss: 64Mb 00:07:38.009 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:38.009 This may also happen if the target rejected all inputs we tried so far 00:07:38.009 [2024-07-15 20:21:30.229169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:38.527 NEW_FUNC[1/658]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:38.527 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:38.527 #150 NEW cov: 10946 ft: 10918 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 3 InsertRepeatedBytes-InsertRepeatedBytes-CopyPart- 00:07:38.787 NEW_FUNC[1/1]: 0x170d910 in nvme_pcie_ctrlr /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_pcie_internal.h:210 00:07:38.787 #151 NEW cov: 10961 ft: 14553 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:07:38.787 NEW_FUNC[1/1]: 0x1a4bb20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:38.787 #157 NEW cov: 10981 ft: 15397 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:39.047 #158 NEW cov: 10981 ft: 16443 corp: 5/129b lim: 32 exec/s: 158 rss: 73Mb L: 32/32 MS: 1 CMP- DE: "\001\000\000V"- 00:07:39.306 #159 NEW cov: 10981 ft: 16933 corp: 6/161b lim: 32 exec/s: 159 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:39.566 #165 NEW cov: 10981 ft: 17612 corp: 7/193b lim: 32 exec/s: 165 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:39.566 #166 NEW cov: 10981 ft: 17694 corp: 8/225b lim: 32 exec/s: 166 rss: 74Mb L: 32/32 MS: 1 PersAutoDict- DE: "\001\000\000V"- 00:07:39.825 #167 NEW cov: 10988 ft: 18042 corp: 9/257b lim: 32 exec/s: 167 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:07:40.084 #168 NEW cov: 10988 ft: 18136 corp: 10/289b lim: 32 exec/s: 84 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:40.084 #168 DONE cov: 10988 ft: 18136 corp: 10/289b lim: 32 exec/s: 84 rss: 74Mb 00:07:40.084 ###### Recommended dictionary. ###### 00:07:40.084 "\001\000\000V" # Uses: 1 00:07:40.084 ###### End of recommended dictionary. 
###### 00:07:40.084 Done 168 runs in 2 second(s) 00:07:40.084 [2024-07-15 20:21:32.367644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:40.343 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:40.343 20:21:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:40.343 [2024-07-15 20:21:32.642902] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:40.343 [2024-07-15 20:21:32.642972] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331958 ] 00:07:40.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.343 [2024-07-15 20:21:32.714581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.603 [2024-07-15 20:21:32.786568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.603 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.603 INFO: Seed: 3124539795 00:07:40.863 INFO: Loaded 1 modules (355122 inline 8-bit counters): 355122 [0x296dc8c, 0x29c47be), 00:07:40.863 INFO: Loaded 1 PC tables (355122 PCs): 355122 [0x29c47c0,0x2f2fae0), 00:07:40.863 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:40.863 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.863 #2 INITED exec/s: 0 rss: 65Mb 00:07:40.863 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:40.863 This may also happen if the target rejected all inputs we tried so far 00:07:40.863 [2024-07-15 20:21:33.027021] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:41.122 NEW_FUNC[1/659]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:41.122 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:41.122 #52 NEW cov: 10947 ft: 10911 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 5 ChangeByte-InsertRepeatedBytes-CrossOver-CopyPart-InsertRepeatedBytes- 00:07:41.381 #53 NEW cov: 10966 ft: 14126 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:07:41.639 NEW_FUNC[1/1]: 0x1a4bb20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:41.639 #54 NEW cov: 10983 ft: 15136 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:41.898 #55 NEW cov: 10983 ft: 15303 corp: 5/129b lim: 32 exec/s: 55 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:41.898 #56 NEW cov: 10983 ft: 15363 corp: 6/161b lim: 32 exec/s: 56 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:42.156 #62 NEW cov: 10983 ft: 16298 corp: 7/193b lim: 32 exec/s: 62 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:42.414 #68 NEW cov: 10983 ft: 16390 corp: 8/225b lim: 32 exec/s: 68 rss: 73Mb L: 32/32 MS: 1 CMP- DE: "i\000\000\000"- 00:07:42.673 #69 NEW cov: 10990 ft: 16764 corp: 9/257b lim: 32 exec/s: 69 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:42.673 #75 NEW cov: 10990 ft: 16979 corp: 10/289b lim: 32 exec/s: 37 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:07:42.673 #75 DONE cov: 10990 ft: 16979 corp: 10/289b lim: 32 exec/s: 37 rss: 73Mb 00:07:42.673 ###### Recommended dictionary. ###### 00:07:42.673 "i\000\000\000" # Uses: 0 00:07:42.673 ###### End of recommended dictionary. 
###### 00:07:42.673 Done 75 runs in 2 second(s) 00:07:42.673 [2024-07-15 20:21:35.037629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:42.934 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:42.934 20:21:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:43.192 [2024-07-15 20:21:35.317799] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:07:43.192 [2024-07-15 20:21:35.317889] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332494 ] 00:07:43.192 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.192 [2024-07-15 20:21:35.391506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.192 [2024-07-15 20:21:35.461765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.451 INFO: Running with entropic power schedule (0xFF, 100). 00:07:43.451 INFO: Seed: 1493560734 00:07:43.451 INFO: Loaded 1 modules (355122 inline 8-bit counters): 355122 [0x296dc8c, 0x29c47be), 00:07:43.451 INFO: Loaded 1 PC tables (355122 PCs): 355122 [0x29c47c0,0x2f2fae0), 00:07:43.451 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:43.451 INFO: A corpus is not provided, starting from an empty corpus 00:07:43.451 #2 INITED exec/s: 0 rss: 65Mb 00:07:43.451 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:43.451 This may also happen if the target rejected all inputs we tried so far 00:07:43.451 [2024-07-15 20:21:35.695151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:43.451 [2024-07-15 20:21:35.746473] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:43.451 [2024-07-15 20:21:35.746545] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:43.970 NEW_FUNC[1/660]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:43.970 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:43.970 #16 NEW cov: 10958 ft: 10862 corp: 2/14b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 4 CopyPart-ShuffleBytes-InsertRepeatedBytes-CopyPart- 00:07:43.970 [2024-07-15 20:21:36.269932] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:43.970 [2024-07-15 20:21:36.269974] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:44.228 #17 NEW cov: 10972 ft: 13488 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:44.228 [2024-07-15 20:21:36.480688] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:44.228 [2024-07-15 20:21:36.480718] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:44.228 NEW_FUNC[1/1]: 0x1a4bb20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:44.228 #18 NEW cov: 10992 ft: 14499 corp: 4/40b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ChangeByte- 00:07:44.487 [2024-07-15 20:21:36.695072] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:44.487 [2024-07-15 20:21:36.695101] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:44.487 #29 NEW cov: 10992 ft: 14776 corp: 5/53b lim: 13 exec/s: 29 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:07:44.746 [2024-07-15 20:21:36.905266] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:44.746 [2024-07-15 20:21:36.905296] 
vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:44.746 #35 NEW cov: 10992 ft: 15469 corp: 6/66b lim: 13 exec/s: 35 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:07:44.746 [2024-07-15 20:21:37.117511] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:44.746 [2024-07-15 20:21:37.117540] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.006 #46 NEW cov: 10992 ft: 15842 corp: 7/79b lim: 13 exec/s: 46 rss: 73Mb L: 13/13 MS: 1 ChangeByte- 00:07:45.006 [2024-07-15 20:21:37.333716] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.006 [2024-07-15 20:21:37.333746] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.265 #57 NEW cov: 10992 ft: 15927 corp: 8/92b lim: 13 exec/s: 57 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:07:45.265 [2024-07-15 20:21:37.548855] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.265 [2024-07-15 20:21:37.548884] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.523 #58 NEW cov: 10999 ft: 16012 corp: 9/105b lim: 13 exec/s: 29 rss: 73Mb L: 13/13 MS: 1 CrossOver- 00:07:45.523 #58 DONE cov: 10999 ft: 16012 corp: 9/105b lim: 13 exec/s: 29 rss: 73Mb 00:07:45.523 Done 58 runs in 2 second(s) 00:07:45.523 [2024-07-15 20:21:37.695643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:45.782 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:45.782 20:21:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:45.782 [2024-07-15 20:21:37.971000] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:45.782 [2024-07-15 20:21:37.971069] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333024 ] 00:07:45.782 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.782 [2024-07-15 20:21:38.041215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.782 [2024-07-15 20:21:38.111723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.052 INFO: Running with entropic power schedule (0xFF, 100). 00:07:46.052 INFO: Seed: 4148560452 00:07:46.052 INFO: Loaded 1 modules (355122 inline 8-bit counters): 355122 [0x296dc8c, 0x29c47be), 00:07:46.052 INFO: Loaded 1 PC tables (355122 PCs): 355122 [0x29c47c0,0x2f2fae0), 00:07:46.052 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:46.052 INFO: A corpus is not provided, starting from an empty corpus 00:07:46.052 #2 INITED exec/s: 0 rss: 64Mb 00:07:46.052 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:46.052 This may also happen if the target rejected all inputs we tried so far 00:07:46.052 [2024-07-15 20:21:38.349515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:46.052 [2024-07-15 20:21:38.373485] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.052 [2024-07-15 20:21:38.373514] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.569 NEW_FUNC[1/660]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:46.569 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:46.569 #46 NEW cov: 10944 ft: 10890 corp: 2/10b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 4 InsertRepeatedBytes-ChangeBit-InsertByte-InsertByte- 00:07:46.569 [2024-07-15 20:21:38.795265] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.569 [2024-07-15 20:21:38.795303] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.569 #47 NEW cov: 10960 ft: 13936 corp: 3/19b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:07:46.569 [2024-07-15 20:21:38.920216] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.569 [2024-07-15 20:21:38.920251] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.828 #48 NEW cov: 10963 ft: 14716 corp: 4/28b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:07:46.828 [2024-07-15 20:21:39.034941] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.828 [2024-07-15 20:21:39.034973] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.828 #54 NEW cov: 10963 ft: 14883 corp: 5/37b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:07:46.828 [2024-07-15 20:21:39.149782] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.828 [2024-07-15 20:21:39.149815] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.085 NEW_FUNC[1/1]: 0x1a4bb20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:07:47.085 #55 NEW cov: 10980 ft: 15658 corp: 6/46b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:47.085 [2024-07-15 20:21:39.276805] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.085 [2024-07-15 20:21:39.276842] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.085 #56 NEW cov: 10980 ft: 15942 corp: 7/55b lim: 9 exec/s: 56 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:07:47.085 [2024-07-15 20:21:39.393866] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.085 [2024-07-15 20:21:39.393899] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.085 #57 NEW cov: 10980 ft: 16035 corp: 8/64b lim: 9 exec/s: 57 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:47.342 [2024-07-15 20:21:39.506812] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.342 [2024-07-15 20:21:39.506845] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.342 #58 NEW cov: 10980 ft: 16396 corp: 9/73b 
lim: 9 exec/s: 58 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:07:47.342 [2024-07-15 20:21:39.630722] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.343 [2024-07-15 20:21:39.630754] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.343 #59 NEW cov: 10980 ft: 16760 corp: 10/82b lim: 9 exec/s: 59 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:07:47.601 [2024-07-15 20:21:39.744539] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.601 [2024-07-15 20:21:39.744571] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.601 #60 NEW cov: 10980 ft: 16787 corp: 11/91b lim: 9 exec/s: 60 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:47.601 [2024-07-15 20:21:39.868281] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.601 [2024-07-15 20:21:39.868313] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.601 #62 NEW cov: 10980 ft: 17399 corp: 12/100b lim: 9 exec/s: 62 rss: 73Mb L: 9/9 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:47.859 [2024-07-15 20:21:40.025537] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.859 [2024-07-15 20:21:40.025568] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:47.859 #63 NEW cov: 10987 ft: 17600 corp: 13/109b lim: 9 exec/s: 63 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:07:47.859 [2024-07-15 20:21:40.221018] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:47.859 [2024-07-15 20:21:40.221053] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:48.118 #64 pulse cov: 10987 ft: 17689 corp: 13/109b lim: 9 exec/s: 32 rss: 73Mb 00:07:48.118 #64 NEW cov: 10987 ft: 17689 corp: 14/118b lim: 9 exec/s: 32 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:07:48.118 #64 DONE cov: 10987 ft: 17689 corp: 14/118b lim: 9 exec/s: 32 rss: 73Mb 00:07:48.118 Done 64 runs in 2 second(s) 00:07:48.118 [2024-07-15 20:21:40.359631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:48.376 20:21:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:48.376 20:21:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:48.376 20:21:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:48.376 20:21:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:48.376 00:07:48.376 real 0m19.313s 00:07:48.376 user 0m27.366s 00:07:48.376 sys 0m1.736s 00:07:48.376 20:21:40 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.376 20:21:40 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:48.376 ************************************ 00:07:48.376 END TEST vfio_llvm_fuzz 00:07:48.376 ************************************ 00:07:48.376 20:21:40 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:07:48.376 20:21:40 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:07:48.376 00:07:48.376 real 1m23.761s 00:07:48.376 user 2m7.953s 00:07:48.376 sys 0m8.935s 00:07:48.376 20:21:40 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.376 20:21:40 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:48.376 ************************************ 00:07:48.376 END TEST llvm_fuzz 00:07:48.376 
************************************ 00:07:48.376 20:21:40 -- common/autotest_common.sh@1142 -- # return 0 00:07:48.376 20:21:40 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:07:48.376 20:21:40 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:07:48.376 20:21:40 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:07:48.376 20:21:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:48.376 20:21:40 -- common/autotest_common.sh@10 -- # set +x 00:07:48.376 20:21:40 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:07:48.376 20:21:40 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:07:48.376 20:21:40 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:07:48.376 20:21:40 -- common/autotest_common.sh@10 -- # set +x 00:07:54.948 INFO: APP EXITING 00:07:54.948 INFO: killing all VMs 00:07:54.948 INFO: killing vhost app 00:07:54.948 INFO: EXIT DONE 00:07:57.477 Waiting for block devices as requested 00:07:57.477 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:57.477 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:57.477 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:57.477 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:57.477 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:57.736 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:57.736 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:57.736 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:57.998 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:57.998 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:57.998 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:58.257 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:58.257 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:58.257 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:58.548 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:58.548 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:58.548 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:01.899 Cleaning 00:08:01.899 Removing: /dev/shm/spdk_tgt_trace.pid297316 00:08:01.899 Removing: /var/run/dpdk/spdk_pid294844 00:08:01.899 Removing: /var/run/dpdk/spdk_pid296101 00:08:01.899 Removing: /var/run/dpdk/spdk_pid297316 00:08:01.899 Removing: /var/run/dpdk/spdk_pid298010 00:08:01.899 Removing: /var/run/dpdk/spdk_pid298874 00:08:01.899 Removing: /var/run/dpdk/spdk_pid299152 00:08:01.899 Removing: /var/run/dpdk/spdk_pid300259 00:08:01.899 Removing: /var/run/dpdk/spdk_pid300293 00:08:01.899 Removing: /var/run/dpdk/spdk_pid300688 00:08:01.899 Removing: /var/run/dpdk/spdk_pid301006 00:08:01.899 Removing: /var/run/dpdk/spdk_pid301328 00:08:01.899 Removing: /var/run/dpdk/spdk_pid301671 00:08:01.899 Removing: /var/run/dpdk/spdk_pid301990 00:08:01.899 Removing: /var/run/dpdk/spdk_pid302275 00:08:01.899 Removing: /var/run/dpdk/spdk_pid302566 00:08:01.899 Removing: /var/run/dpdk/spdk_pid302875 00:08:01.899 Removing: /var/run/dpdk/spdk_pid303724 00:08:01.899 Removing: /var/run/dpdk/spdk_pid306691 00:08:01.899 Removing: /var/run/dpdk/spdk_pid307184 00:08:01.899 Removing: /var/run/dpdk/spdk_pid307482 00:08:01.899 Removing: /var/run/dpdk/spdk_pid307513 00:08:01.899 Removing: /var/run/dpdk/spdk_pid308096 00:08:01.899 Removing: /var/run/dpdk/spdk_pid308330 00:08:01.899 Removing: /var/run/dpdk/spdk_pid308901 00:08:01.899 Removing: /var/run/dpdk/spdk_pid309084 00:08:01.899 Removing: /var/run/dpdk/spdk_pid309307 00:08:01.899 Removing: /var/run/dpdk/spdk_pid309476 00:08:01.899 Removing: /var/run/dpdk/spdk_pid309768 00:08:01.899 Removing: /var/run/dpdk/spdk_pid309788 00:08:01.899 Removing: 
/var/run/dpdk/spdk_pid310409 00:08:01.899 Removing: /var/run/dpdk/spdk_pid310668 00:08:01.899 Removing: /var/run/dpdk/spdk_pid310844 00:08:01.899 Removing: /var/run/dpdk/spdk_pid311051 00:08:01.899 Removing: /var/run/dpdk/spdk_pid311355 00:08:01.899 Removing: /var/run/dpdk/spdk_pid311382 00:08:01.899 Removing: /var/run/dpdk/spdk_pid311696 00:08:01.899 Removing: /var/run/dpdk/spdk_pid311934 00:08:01.899 Removing: /var/run/dpdk/spdk_pid312159 00:08:01.899 Removing: /var/run/dpdk/spdk_pid312390 00:08:01.899 Removing: /var/run/dpdk/spdk_pid312613 00:08:01.899 Removing: /var/run/dpdk/spdk_pid312874 00:08:01.899 Removing: /var/run/dpdk/spdk_pid313155 00:08:01.899 Removing: /var/run/dpdk/spdk_pid313442 00:08:01.899 Removing: /var/run/dpdk/spdk_pid313726 00:08:01.899 Removing: /var/run/dpdk/spdk_pid314005 00:08:01.899 Removing: /var/run/dpdk/spdk_pid314290 00:08:01.899 Removing: /var/run/dpdk/spdk_pid314575 00:08:01.899 Removing: /var/run/dpdk/spdk_pid314866 00:08:01.899 Removing: /var/run/dpdk/spdk_pid315150 00:08:01.899 Removing: /var/run/dpdk/spdk_pid315392 00:08:01.899 Removing: /var/run/dpdk/spdk_pid315608 00:08:01.899 Removing: /var/run/dpdk/spdk_pid315841 00:08:01.899 Removing: /var/run/dpdk/spdk_pid316072 00:08:01.899 Removing: /var/run/dpdk/spdk_pid316331 00:08:01.899 Removing: /var/run/dpdk/spdk_pid316614 00:08:01.899 Removing: /var/run/dpdk/spdk_pid316905 00:08:01.899 Removing: /var/run/dpdk/spdk_pid317147 00:08:01.899 Removing: /var/run/dpdk/spdk_pid317557 00:08:01.899 Removing: /var/run/dpdk/spdk_pid318033 00:08:01.899 Removing: /var/run/dpdk/spdk_pid318568 00:08:01.899 Removing: /var/run/dpdk/spdk_pid319104 00:08:01.899 Removing: /var/run/dpdk/spdk_pid319404 00:08:01.899 Removing: /var/run/dpdk/spdk_pid319930 00:08:01.899 Removing: /var/run/dpdk/spdk_pid320433 00:08:01.899 Removing: /var/run/dpdk/spdk_pid320745 00:08:01.899 Removing: /var/run/dpdk/spdk_pid321285 00:08:01.899 Removing: /var/run/dpdk/spdk_pid321736 00:08:01.899 Removing: /var/run/dpdk/spdk_pid322104 00:08:01.899 Removing: /var/run/dpdk/spdk_pid322633 00:08:01.899 Removing: /var/run/dpdk/spdk_pid323064 00:08:01.899 Removing: /var/run/dpdk/spdk_pid323462 00:08:01.899 Removing: /var/run/dpdk/spdk_pid323997 00:08:01.899 Removing: /var/run/dpdk/spdk_pid324409 00:08:01.899 Removing: /var/run/dpdk/spdk_pid324818 00:08:01.899 Removing: /var/run/dpdk/spdk_pid325352 00:08:01.899 Removing: /var/run/dpdk/spdk_pid325831 00:08:01.899 Removing: /var/run/dpdk/spdk_pid326297 00:08:01.899 Removing: /var/run/dpdk/spdk_pid327264 00:08:01.899 Removing: /var/run/dpdk/spdk_pid327730 00:08:01.899 Removing: /var/run/dpdk/spdk_pid328089 00:08:01.899 Removing: /var/run/dpdk/spdk_pid328618 00:08:01.899 Removing: /var/run/dpdk/spdk_pid329072 00:08:01.899 Removing: /var/run/dpdk/spdk_pid329439 00:08:01.899 Removing: /var/run/dpdk/spdk_pid330046 00:08:01.899 Removing: /var/run/dpdk/spdk_pid330582 00:08:01.899 Removing: /var/run/dpdk/spdk_pid331117 00:08:01.899 Removing: /var/run/dpdk/spdk_pid331517 00:08:01.899 Removing: /var/run/dpdk/spdk_pid331958 00:08:01.899 Removing: /var/run/dpdk/spdk_pid332494 00:08:01.899 Removing: /var/run/dpdk/spdk_pid333024 00:08:01.899 Clean 00:08:02.159 20:21:54 -- common/autotest_common.sh@1451 -- # return 0 00:08:02.159 20:21:54 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:08:02.159 20:21:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.159 20:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:02.159 20:21:54 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:08:02.159 20:21:54 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.159 20:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:02.159 20:21:54 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:02.159 20:21:54 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:02.159 20:21:54 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:02.159 20:21:54 -- spdk/autotest.sh@391 -- # hash lcov 00:08:02.159 20:21:54 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:08:02.159 20:21:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:02.159 20:21:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:02.159 20:21:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.159 20:21:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.159 20:21:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.159 20:21:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.159 20:21:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.159 20:21:54 -- paths/export.sh@5 -- $ export PATH 00:08:02.159 20:21:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.159 20:21:54 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:02.159 20:21:54 -- common/autobuild_common.sh@444 -- $ date +%s 00:08:02.159 20:21:54 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721067714.XXXXXX 00:08:02.159 20:21:54 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721067714.6cl0NK 00:08:02.159 20:21:54 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:08:02.159 20:21:54 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:08:02.159 20:21:54 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:08:02.159 20:21:54 -- 
common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:02.159 20:21:54 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:02.159 20:21:54 -- common/autobuild_common.sh@460 -- $ get_config_params 00:08:02.159 20:21:54 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:08:02.159 20:21:54 -- common/autotest_common.sh@10 -- $ set +x 00:08:02.419 20:21:54 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:08:02.419 20:21:54 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:08:02.419 20:21:54 -- pm/common@17 -- $ local monitor 00:08:02.419 20:21:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:02.419 20:21:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:02.419 20:21:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:02.419 20:21:54 -- pm/common@21 -- $ date +%s 00:08:02.419 20:21:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:02.419 20:21:54 -- pm/common@21 -- $ date +%s 00:08:02.419 20:21:54 -- pm/common@25 -- $ sleep 1 00:08:02.419 20:21:54 -- pm/common@21 -- $ date +%s 00:08:02.419 20:21:54 -- pm/common@21 -- $ date +%s 00:08:02.419 20:21:54 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721067714 00:08:02.419 20:21:54 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721067714 00:08:02.419 20:21:54 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721067714 00:08:02.419 20:21:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721067714 00:08:02.419 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721067714_collect-vmstat.pm.log 00:08:02.419 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721067714_collect-cpu-load.pm.log 00:08:02.419 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721067714_collect-cpu-temp.pm.log 00:08:02.419 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721067714_collect-bmc-pm.bmc.pm.log 00:08:03.357 20:21:55 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:08:03.357 20:21:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:08:03.357 20:21:55 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:03.357 
20:21:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:08:03.357 20:21:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:08:03.357 20:21:55 -- spdk/autopackage.sh@19 -- $ timing_finish 00:08:03.357 20:21:55 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:03.357 20:21:55 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:08:03.357 20:21:55 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:03.357 20:21:55 -- spdk/autopackage.sh@20 -- $ exit 0 00:08:03.357 20:21:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:08:03.357 20:21:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:03.357 20:21:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:03.357 20:21:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.357 20:21:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:03.357 20:21:55 -- pm/common@44 -- $ pid=339953 00:08:03.357 20:21:55 -- pm/common@50 -- $ kill -TERM 339953 00:08:03.357 20:21:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.357 20:21:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:03.357 20:21:55 -- pm/common@44 -- $ pid=339956 00:08:03.357 20:21:55 -- pm/common@50 -- $ kill -TERM 339956 00:08:03.357 20:21:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.357 20:21:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:03.357 20:21:55 -- pm/common@44 -- $ pid=339958 00:08:03.357 20:21:55 -- pm/common@50 -- $ kill -TERM 339958 00:08:03.358 20:21:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:03.358 20:21:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:03.358 20:21:55 -- pm/common@44 -- $ pid=340011 00:08:03.358 20:21:55 -- pm/common@50 -- $ sudo -E kill -TERM 340011 00:08:03.358 + [[ -n 187868 ]] 00:08:03.358 + sudo kill 187868 00:08:03.368 [Pipeline] } 00:08:03.390 [Pipeline] // stage 00:08:03.396 [Pipeline] } 00:08:03.417 [Pipeline] // timeout 00:08:03.423 [Pipeline] } 00:08:03.442 [Pipeline] // catchError 00:08:03.450 [Pipeline] } 00:08:03.469 [Pipeline] // wrap 00:08:03.476 [Pipeline] } 00:08:03.493 [Pipeline] // catchError 00:08:03.504 [Pipeline] stage 00:08:03.507 [Pipeline] { (Epilogue) 00:08:03.523 [Pipeline] catchError 00:08:03.525 [Pipeline] { 00:08:03.540 [Pipeline] echo 00:08:03.541 Cleanup processes 00:08:03.546 [Pipeline] sh 00:08:03.827 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:03.827 250747 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067378 00:08:03.827 250778 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067378 00:08:03.827 340217 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:08:03.827 341081 sudo pgrep -af 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:03.842 [Pipeline] sh 00:08:04.129 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:04.129 ++ grep -v 'sudo pgrep' 00:08:04.129 ++ awk '{print $1}' 00:08:04.129 + sudo kill -9 250747 250778 340217 00:08:04.142 [Pipeline] sh 00:08:04.429 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:04.429 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:08:04.429 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:08:05.820 [Pipeline] sh 00:08:06.107 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:06.107 Artifacts sizes are good 00:08:06.131 [Pipeline] archiveArtifacts 00:08:06.141 Archiving artifacts 00:08:06.199 [Pipeline] sh 00:08:06.487 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:06.505 [Pipeline] cleanWs 00:08:06.515 [WS-CLEANUP] Deleting project workspace... 00:08:06.515 [WS-CLEANUP] Deferred wipeout is used... 00:08:06.523 [WS-CLEANUP] done 00:08:06.525 [Pipeline] } 00:08:06.547 [Pipeline] // catchError 00:08:06.559 [Pipeline] sh 00:08:06.841 + logger -p user.info -t JENKINS-CI 00:08:06.851 [Pipeline] } 00:08:06.866 [Pipeline] // stage 00:08:06.871 [Pipeline] } 00:08:06.893 [Pipeline] // node 00:08:06.898 [Pipeline] End of Pipeline 00:08:06.944 Finished: SUCCESS